diff --git a/docs/_posts/ahmedlone127/2024-09-03-bge_base_securiti_dataset_1_v19_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-03-bge_base_securiti_dataset_1_v19_pipeline_en.md new file mode 100644 index 00000000000000..316cc03d44c837 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-03-bge_base_securiti_dataset_1_v19_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English bge_base_securiti_dataset_1_v19_pipeline pipeline BGEEmbeddings from MugheesAwan11 +author: John Snow Labs +name: bge_base_securiti_dataset_1_v19_pipeline +date: 2024-09-03 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BGEEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bge_base_securiti_dataset_1_v19_pipeline` is a English model originally trained by MugheesAwan11. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bge_base_securiti_dataset_1_v19_pipeline_en_5.5.0_3.0_1725357366801.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bge_base_securiti_dataset_1_v19_pipeline_en_5.5.0_3.0_1725357366801.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bge_base_securiti_dataset_1_v19_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bge_base_securiti_dataset_1_v19_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
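+
+The snippet above assumes an active Spark session with Spark NLP loaded, that `PretrainedPipeline` has been imported, and that `df` is a DataFrame with a `text` column. A minimal sketch of that setup (the sample sentence is arbitrary):
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+# The pipeline's DocumentAssembler reads the "text" column of the input DataFrame
+df = spark.createDataFrame([["Spark NLP ships hundreds of pretrained embedding models."]]).toDF("text")
+
+pipeline = PretrainedPipeline("bge_base_securiti_dataset_1_v19_pipeline", lang="en")
+annotations = pipeline.transform(df)
+annotations.printSchema()  # inspect the output columns produced by the included stages
+```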
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bge_base_securiti_dataset_1_v19_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|381.4 MB| + +## References + +https://huggingface.co/MugheesAwan11/bge-base-securiti-dataset-1-v19 + +## Included Models + +- DocumentAssembler +- BGEEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-03-burmese_translation_helsinki_en.md b/docs/_posts/ahmedlone127/2024-09-03-burmese_translation_helsinki_en.md new file mode 100644 index 00000000000000..30ce8c6b13b880 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-03-burmese_translation_helsinki_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English burmese_translation_helsinki MarianTransformer from duwuonline +author: John Snow Labs +name: burmese_translation_helsinki +date: 2024-09-03 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_translation_helsinki` is a English model originally trained by duwuonline. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_translation_helsinki_en_5.5.0_3.0_1725345497152.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_translation_helsinki_en_5.5.0_3.0_1725345497152.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import SentenceDetectorDLModel, MarianTransformer
+from pyspark.ml import Pipeline
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+# split the incoming document into sentences before translation
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("burmese_translation_helsinki","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("burmese_translation_helsinki","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
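+
+Once the pipeline has been fitted, the translated text is available in the `translation` output column defined above. A minimal sketch of reading it back, either from the DataFrame or through Spark NLP's `LightPipeline` (the example sentence is arbitrary):
+
+```python
+from sparknlp.base import LightPipeline
+
+# DataFrame route: each row holds the translated sentences for one input text
+pipelineDF.select("translation.result").show(truncate=False)
+
+# LightPipeline route: convenient for ad-hoc strings without building a DataFrame
+light = LightPipeline(pipelineModel)
+print(light.annotate("I love spark-nlp")["translation"])
+```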
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_translation_helsinki| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|474.8 MB| + +## References + +https://huggingface.co/duwuonline/my-translation-helsinki \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-03-distilroberta_sst2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-03-distilroberta_sst2_pipeline_en.md new file mode 100644 index 00000000000000..b90deabc678480 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-03-distilroberta_sst2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilroberta_sst2_pipeline pipeline RoBertaForSequenceClassification from gokuls +author: John Snow Labs +name: distilroberta_sst2_pipeline +date: 2024-09-03 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilroberta_sst2_pipeline` is a English model originally trained by gokuls. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilroberta_sst2_pipeline_en_5.5.0_3.0_1725369518559.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilroberta_sst2_pipeline_en_5.5.0_3.0_1725369518559.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilroberta_sst2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilroberta_sst2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
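+
+For quick experiments on plain strings, the same pretrained pipeline can also be used through `annotate()`, which returns a dictionary keyed by the pipeline's output column names (the example sentence is arbitrary):
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("distilroberta_sst2_pipeline", lang="en")
+result = pipeline.annotate("This movie was absolutely wonderful!")
+print(result)
+```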
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilroberta_sst2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|308.6 MB| + +## References + +https://huggingface.co/gokuls/distilroberta-sst2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-03-opus_maltese_thai_english_finetuned_english_tonga_tonga_islands_thai_theetawat_en.md b/docs/_posts/ahmedlone127/2024-09-03-opus_maltese_thai_english_finetuned_english_tonga_tonga_islands_thai_theetawat_en.md new file mode 100644 index 00000000000000..e11316eff977c6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-03-opus_maltese_thai_english_finetuned_english_tonga_tonga_islands_thai_theetawat_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_thai_english_finetuned_english_tonga_tonga_islands_thai_theetawat MarianTransformer from Theetawat +author: John Snow Labs +name: opus_maltese_thai_english_finetuned_english_tonga_tonga_islands_thai_theetawat +date: 2024-09-03 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_thai_english_finetuned_english_tonga_tonga_islands_thai_theetawat` is a English model originally trained by Theetawat. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_thai_english_finetuned_english_tonga_tonga_islands_thai_theetawat_en_5.5.0_3.0_1725345502547.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_thai_english_finetuned_english_tonga_tonga_islands_thai_theetawat_en_5.5.0_3.0_1725345502547.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import SentenceDetectorDLModel, MarianTransformer
+from pyspark.ml import Pipeline
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+# split the incoming document into sentences before translation
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("opus_maltese_thai_english_finetuned_english_tonga_tonga_islands_thai_theetawat","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("opus_maltese_thai_english_finetuned_english_tonga_tonga_islands_thai_theetawat","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_thai_english_finetuned_english_tonga_tonga_islands_thai_theetawat| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|524.2 MB| + +## References + +https://huggingface.co/Theetawat/opus-mt-th-en-finetuned-en-to-th \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-03-spoken_deberta_small_v2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-03-spoken_deberta_small_v2_pipeline_en.md new file mode 100644 index 00000000000000..826e2c205a4de8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-03-spoken_deberta_small_v2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English spoken_deberta_small_v2_pipeline pipeline DeBertaEmbeddings from viethq188 +author: John Snow Labs +name: spoken_deberta_small_v2_pipeline +date: 2024-09-03 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`spoken_deberta_small_v2_pipeline` is a English model originally trained by viethq188. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/spoken_deberta_small_v2_pipeline_en_5.5.0_3.0_1725377416913.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/spoken_deberta_small_v2_pipeline_en_5.5.0_3.0_1725377416913.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("spoken_deberta_small_v2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("spoken_deberta_small_v2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|spoken_deberta_small_v2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|387.8 MB| + +## References + +https://huggingface.co/viethq188/spoken-deberta-small-v2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-03-twitter_roberta_base_sentiment_kapiche_en.md b/docs/_posts/ahmedlone127/2024-09-03-twitter_roberta_base_sentiment_kapiche_en.md new file mode 100644 index 00000000000000..cc8e82d986050d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-03-twitter_roberta_base_sentiment_kapiche_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English twitter_roberta_base_sentiment_kapiche RoBertaForSequenceClassification from Kapiche +author: John Snow Labs +name: twitter_roberta_base_sentiment_kapiche +date: 2024-09-03 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`twitter_roberta_base_sentiment_kapiche` is a English model originally trained by Kapiche. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/twitter_roberta_base_sentiment_kapiche_en_5.5.0_3.0_1725368614329.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/twitter_roberta_base_sentiment_kapiche_en_5.5.0_3.0_1725368614329.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import Tokenizer, RoBertaForSequenceClassification
+from pyspark.ml import Pipeline
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("twitter_roberta_base_sentiment_kapiche","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("twitter_roberta_base_sentiment_kapiche", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
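+
+The predicted sentiment label for each input row lands in the `class` output column configured above. A quick way to inspect the predictions next to the original text:
+
+```python
+pipelineDF.select("text", "class.result").show(truncate=False)
+```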
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|twitter_roberta_base_sentiment_kapiche| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.1 MB| + +## References + +https://huggingface.co/Kapiche/twitter-roberta-base-sentiment \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-03-xlmroberta_ner_victen_base_finetuned_panx_pipeline_de.md b/docs/_posts/ahmedlone127/2024-09-03-xlmroberta_ner_victen_base_finetuned_panx_pipeline_de.md new file mode 100644 index 00000000000000..830b9985c4a678 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-03-xlmroberta_ner_victen_base_finetuned_panx_pipeline_de.md @@ -0,0 +1,70 @@ +--- +layout: model +title: German xlmroberta_ner_victen_base_finetuned_panx_pipeline pipeline XlmRoBertaForTokenClassification from victen +author: John Snow Labs +name: xlmroberta_ner_victen_base_finetuned_panx_pipeline +date: 2024-09-03 +tags: [de, open_source, pipeline, onnx] +task: Named Entity Recognition +language: de +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlmroberta_ner_victen_base_finetuned_panx_pipeline` is a German model originally trained by victen. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_victen_base_finetuned_panx_pipeline_de_5.5.0_3.0_1725348073405.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_victen_base_finetuned_panx_pipeline_de_5.5.0_3.0_1725348073405.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlmroberta_ner_victen_base_finetuned_panx_pipeline", lang = "de") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlmroberta_ner_victen_base_finetuned_panx_pipeline", lang = "de") +val annotations = pipeline.transform(df) + +``` +
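+
+Since this pipeline was trained on German PAN-X data, the input DataFrame should contain German text in a `text` column; a minimal sketch (the sample sentence is arbitrary):
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+df = spark.createDataFrame([["Angela Merkel besuchte im Mai das Museum in Berlin."]]).toDF("text")
+pipeline = PretrainedPipeline("xlmroberta_ner_victen_base_finetuned_panx_pipeline", lang="de")
+annotations = pipeline.transform(df)
+annotations.printSchema()  # the token-level entity tags appear in one of the listed output columns
+```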
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlmroberta_ner_victen_base_finetuned_panx_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|de| +|Size:|853.8 MB| + +## References + +https://huggingface.co/victen/xlm-roberta-base-finetuned-panx-de + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-203_hw2_branflake_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-04-203_hw2_branflake_pipeline_en.md new file mode 100644 index 00000000000000..b7a605566ae328 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-203_hw2_branflake_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English 203_hw2_branflake_pipeline pipeline RoBertaForQuestionAnswering from branflake +author: John Snow Labs +name: 203_hw2_branflake_pipeline +date: 2024-09-04 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`203_hw2_branflake_pipeline` is a English model originally trained by branflake. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/203_hw2_branflake_pipeline_en_5.5.0_3.0_1725483891814.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/203_hw2_branflake_pipeline_en_5.5.0_3.0_1725483891814.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("203_hw2_branflake_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("203_hw2_branflake_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
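+
+Because this pipeline starts with a MultiDocumentAssembler, it expects a question and a context per row rather than a single `text` column. A sketch of building such a DataFrame (the `question`/`context` column names follow common Spark NLP QA examples and are an assumption here):
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+df = spark.createDataFrame(
+    [["What is my name?", "My name is Clara and I live in Berkeley."]]
+).toDF("question", "context")
+
+pipeline = PretrainedPipeline("203_hw2_branflake_pipeline", lang="en")
+pipeline.transform(df).printSchema()  # the predicted answer span appears in the QA output column
+```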
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|203_hw2_branflake_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|464.0 MB| + +## References + +https://huggingface.co/branflake/203_hw2 + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-all_mpnet_base_v2_survey3000_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-04-all_mpnet_base_v2_survey3000_pipeline_en.md new file mode 100644 index 00000000000000..8b9d8d71d552a9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-all_mpnet_base_v2_survey3000_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English all_mpnet_base_v2_survey3000_pipeline pipeline MPNetEmbeddings from zihoo +author: John Snow Labs +name: all_mpnet_base_v2_survey3000_pipeline +date: 2024-09-04 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_mpnet_base_v2_survey3000_pipeline` is a English model originally trained by zihoo. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_survey3000_pipeline_en_5.5.0_3.0_1725470274971.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_survey3000_pipeline_en_5.5.0_3.0_1725470274971.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("all_mpnet_base_v2_survey3000_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("all_mpnet_base_v2_survey3000_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_mpnet_base_v2_survey3000_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/zihoo/all-mpnet-base-v2-survey3000 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-burmese_awesome_roberta_model_en.md b/docs/_posts/ahmedlone127/2024-09-04-burmese_awesome_roberta_model_en.md new file mode 100644 index 00000000000000..6a9615de1000fb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-burmese_awesome_roberta_model_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English burmese_awesome_roberta_model DistilBertForTokenClassification from Meli-nlp +author: John Snow Labs +name: burmese_awesome_roberta_model +date: 2024-09-04 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_roberta_model` is a English model originally trained by Meli-nlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_roberta_model_en_5.5.0_3.0_1725461290776.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_roberta_model_en_5.5.0_3.0_1725461290776.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import Tokenizer, DistilBertForTokenClassification
+from pyspark.ml import Pipeline
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_awesome_roberta_model","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_awesome_roberta_model", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
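+
+The `ner` column holds one IOB tag per token. If grouped entities are easier to work with, Spark NLP's `NerConverter` can assemble the tagged tokens into chunks; a small sketch (the `ner_chunk` column name is just an illustrative choice):
+
+```python
+from sparknlp.annotator import NerConverter
+
+converter = NerConverter() \
+    .setInputCols(["document", "token", "ner"]) \
+    .setOutputCol("ner_chunk")
+
+converter.transform(pipelineDF).selectExpr("explode(ner_chunk.result) as entity").show(truncate=False)
+```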
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_roberta_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|367.9 MB| + +## References + +https://huggingface.co/Meli-nlp/my_awesome_roberta_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-burmese_awesome_wnut_model_subham123_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-04-burmese_awesome_wnut_model_subham123_pipeline_en.md new file mode 100644 index 00000000000000..617187ccb0889f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-burmese_awesome_wnut_model_subham123_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_awesome_wnut_model_subham123_pipeline pipeline DistilBertForTokenClassification from subham123 +author: John Snow Labs +name: burmese_awesome_wnut_model_subham123_pipeline +date: 2024-09-04 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_wnut_model_subham123_pipeline` is a English model originally trained by subham123. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_subham123_pipeline_en_5.5.0_3.0_1725460834102.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_subham123_pipeline_en_5.5.0_3.0_1725460834102.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_wnut_model_subham123_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_wnut_model_subham123_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_wnut_model_subham123_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/subham123/my_awesome_wnut_model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-chatrag_deberta_en.md b/docs/_posts/ahmedlone127/2024-09-04-chatrag_deberta_en.md new file mode 100644 index 00000000000000..abf681af928b89 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-chatrag_deberta_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English chatrag_deberta DeBertaForSequenceClassification from AgentPublic +author: John Snow Labs +name: chatrag_deberta +date: 2024-09-04 +tags: [en, open_source, onnx, sequence_classification, deberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DeBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`chatrag_deberta` is a English model originally trained by AgentPublic. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/chatrag_deberta_en_5.5.0_3.0_1725439819333.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/chatrag_deberta_en_5.5.0_3.0_1725439819333.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import Tokenizer, DeBertaForSequenceClassification
+from pyspark.ml import Pipeline
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DeBertaForSequenceClassification.pretrained("chatrag_deberta","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DeBertaForSequenceClassification.pretrained("chatrag_deberta", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
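+
+To see which labels this checkpoint predicts and to read the prediction for each row, something along these lines should work (assuming the loaded model exposes its label set through `getClasses()`):
+
+```python
+print(sequenceClassifier.getClasses())
+pipelineDF.select("class.result").show(truncate=False)
+```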
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|chatrag_deberta| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|827.5 MB| + +## References + +https://huggingface.co/AgentPublic/chatrag-deberta \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-craft_clinicalbert_ner_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-04-craft_clinicalbert_ner_pipeline_en.md new file mode 100644 index 00000000000000..60b7f7ab6aa2cf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-craft_clinicalbert_ner_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English craft_clinicalbert_ner_pipeline pipeline DistilBertForTokenClassification from judithrosell +author: John Snow Labs +name: craft_clinicalbert_ner_pipeline +date: 2024-09-04 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`craft_clinicalbert_ner_pipeline` is a English model originally trained by judithrosell. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/craft_clinicalbert_ner_pipeline_en_5.5.0_3.0_1725476189172.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/craft_clinicalbert_ner_pipeline_en_5.5.0_3.0_1725476189172.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("craft_clinicalbert_ner_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("craft_clinicalbert_ner_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|craft_clinicalbert_ner_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|505.4 MB| + +## References + +https://huggingface.co/judithrosell/CRAFT_ClinicalBERT_NER + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-deberta_v3_base_finetuned_bluegennx_run2_6_en.md b/docs/_posts/ahmedlone127/2024-09-04-deberta_v3_base_finetuned_bluegennx_run2_6_en.md new file mode 100644 index 00000000000000..d6e5b1d7e4c58f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-deberta_v3_base_finetuned_bluegennx_run2_6_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English deberta_v3_base_finetuned_bluegennx_run2_6 DeBertaForTokenClassification from C4Scale +author: John Snow Labs +name: deberta_v3_base_finetuned_bluegennx_run2_6 +date: 2024-09-04 +tags: [en, open_source, onnx, token_classification, deberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DeBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`deberta_v3_base_finetuned_bluegennx_run2_6` is a English model originally trained by C4Scale. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deberta_v3_base_finetuned_bluegennx_run2_6_en_5.5.0_3.0_1725475330103.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deberta_v3_base_finetuned_bluegennx_run2_6_en_5.5.0_3.0_1725475330103.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import Tokenizer, DeBertaForTokenClassification
+from pyspark.ml import Pipeline
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DeBertaForTokenClassification.pretrained("deberta_v3_base_finetuned_bluegennx_run2_6","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = DeBertaForTokenClassification.pretrained("deberta_v3_base_finetuned_bluegennx_run2_6", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
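+
+Because the `token` and `ner` output columns are aligned position by position, the token-level predictions can be reviewed side by side:
+
+```python
+pipelineDF.select("token.result", "ner.result").show(truncate=False)
+```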
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|deberta_v3_base_finetuned_bluegennx_run2_6| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|607.8 MB| + +## References + +https://huggingface.co/C4Scale/deberta-v3-base_finetuned_bluegennx_run2.6 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-deberta_v3_base_finetuned_mcqa_michaellutz_en.md b/docs/_posts/ahmedlone127/2024-09-04-deberta_v3_base_finetuned_mcqa_michaellutz_en.md new file mode 100644 index 00000000000000..914c91177495ac --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-deberta_v3_base_finetuned_mcqa_michaellutz_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English deberta_v3_base_finetuned_mcqa_michaellutz DeBertaForSequenceClassification from michaellutz +author: John Snow Labs +name: deberta_v3_base_finetuned_mcqa_michaellutz +date: 2024-09-04 +tags: [en, open_source, onnx, sequence_classification, deberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DeBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`deberta_v3_base_finetuned_mcqa_michaellutz` is a English model originally trained by michaellutz. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deberta_v3_base_finetuned_mcqa_michaellutz_en_5.5.0_3.0_1725468480077.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deberta_v3_base_finetuned_mcqa_michaellutz_en_5.5.0_3.0_1725468480077.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import Tokenizer, DeBertaForSequenceClassification
+from pyspark.ml import Pipeline
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DeBertaForSequenceClassification.pretrained("deberta_v3_base_finetuned_mcqa_michaellutz","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DeBertaForSequenceClassification.pretrained("deberta_v3_base_finetuned_mcqa_michaellutz", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|deberta_v3_base_finetuned_mcqa_michaellutz| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|579.3 MB| + +## References + +https://huggingface.co/michaellutz/deberta-v3-base-finetuned-mcqa \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-distilbert_base_uncased_finetuned_imdb_phantatbach_en.md b/docs/_posts/ahmedlone127/2024-09-04-distilbert_base_uncased_finetuned_imdb_phantatbach_en.md new file mode 100644 index 00000000000000..b2c0b8242f2a5b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-distilbert_base_uncased_finetuned_imdb_phantatbach_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_phantatbach DistilBertEmbeddings from phantatbach +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_phantatbach +date: 2024-09-04 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_phantatbach` is a English model originally trained by phantatbach. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_phantatbach_en_5.5.0_3.0_1725414185596.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_phantatbach_en_5.5.0_3.0_1725414185596.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_phantatbach","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_phantatbach","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
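+
+The `embeddings` column produced above holds token-level annotations. If plain vectors are more convenient for downstream Spark ML stages, an `EmbeddingsFinisher` can be appended; a minimal sketch:
+
+```python
+from sparknlp.base import EmbeddingsFinisher
+
+finisher = EmbeddingsFinisher() \
+    .setInputCols(["embeddings"]) \
+    .setOutputCols(["finished_embeddings"]) \
+    .setOutputAsVector(True)
+
+finisher.transform(pipelineDF).select("finished_embeddings").show(truncate=80)
+```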
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_phantatbach| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/phantatbach/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-distilbert_base_uncased_finetuned_ner_anuroopkeshav_en.md b/docs/_posts/ahmedlone127/2024-09-04-distilbert_base_uncased_finetuned_ner_anuroopkeshav_en.md new file mode 100644 index 00000000000000..e9eeac1afe0510 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-distilbert_base_uncased_finetuned_ner_anuroopkeshav_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_ner_anuroopkeshav DistilBertForTokenClassification from anuroopkeshav +author: John Snow Labs +name: distilbert_base_uncased_finetuned_ner_anuroopkeshav +date: 2024-09-04 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_ner_anuroopkeshav` is a English model originally trained by anuroopkeshav. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_anuroopkeshav_en_5.5.0_3.0_1725448321510.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_anuroopkeshav_en_5.5.0_3.0_1725448321510.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import Tokenizer, DistilBertForTokenClassification
+from pyspark.ml import Pipeline
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_ner_anuroopkeshav","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_ner_anuroopkeshav", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_ner_anuroopkeshav| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/anuroopkeshav/distilbert-base-uncased-finetuned-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-dummy_model_dvd005_en.md b/docs/_posts/ahmedlone127/2024-09-04-dummy_model_dvd005_en.md new file mode 100644 index 00000000000000..218224502e7e54 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-dummy_model_dvd005_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_dvd005 CamemBertEmbeddings from DvD005 +author: John Snow Labs +name: dummy_model_dvd005 +date: 2024-09-04 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_dvd005` is a English model originally trained by DvD005. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_dvd005_en_5.5.0_3.0_1725408500688.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_dvd005_en_5.5.0_3.0_1725408500688.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_dvd005","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_dvd005","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_dvd005| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/DvD005/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-dummy_model_raghav0802_en.md b/docs/_posts/ahmedlone127/2024-09-04-dummy_model_raghav0802_en.md new file mode 100644 index 00000000000000..5f95d3db664d35 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-dummy_model_raghav0802_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_raghav0802 CamemBertEmbeddings from Raghav0802 +author: John Snow Labs +name: dummy_model_raghav0802 +date: 2024-09-04 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_raghav0802` is a English model originally trained by Raghav0802. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_raghav0802_en_5.5.0_3.0_1725442791921.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_raghav0802_en_5.5.0_3.0_1725442791921.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_raghav0802","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_raghav0802","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_raghav0802| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/Raghav0802/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-finer_distillbert_v2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-04-finer_distillbert_v2_pipeline_en.md new file mode 100644 index 00000000000000..4f29bc9ee7a5ff --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-finer_distillbert_v2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English finer_distillbert_v2_pipeline pipeline DistilBertForTokenClassification from HariLuru +author: John Snow Labs +name: finer_distillbert_v2_pipeline +date: 2024-09-04 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finer_distillbert_v2_pipeline` is a English model originally trained by HariLuru. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finer_distillbert_v2_pipeline_en_5.5.0_3.0_1725492678675.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finer_distillbert_v2_pipeline_en_5.5.0_3.0_1725492678675.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("finer_distillbert_v2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("finer_distillbert_v2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finer_distillbert_v2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/HariLuru/finer_distillbert_v2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-marianmt_igbo_best_18_10_23_ig.md b/docs/_posts/ahmedlone127/2024-09-04-marianmt_igbo_best_18_10_23_ig.md new file mode 100644 index 00000000000000..0719eb859374b9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-marianmt_igbo_best_18_10_23_ig.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Igbo marianmt_igbo_best_18_10_23 MarianTransformer from Sunbird +author: John Snow Labs +name: marianmt_igbo_best_18_10_23 +date: 2024-09-04 +tags: [ig, open_source, onnx, translation, marian] +task: Translation +language: ig +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marianmt_igbo_best_18_10_23` is a Igbo model originally trained by Sunbird. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marianmt_igbo_best_18_10_23_ig_5.5.0_3.0_1725493920539.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marianmt_igbo_best_18_10_23_ig_5.5.0_3.0_1725493920539.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("marianmt_igbo_best_18_10_23","ig") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
  .setInputCols(Array("document"))
  .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("marianmt_igbo_best_18_10_23","ig")
  .setInputCols(Array("sentence"))
  .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
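Once the fitted pipeline has produced `pipelineDF`, the translated text can be read from the `translation` output column. A short follow-up sketch using the column names from the snippet above:

```python
from pyspark.sql import functions as F

# Each row of "translation" is an array of annotations; `.result` holds the
# translated string for every detected sentence.
pipelineDF.select(F.explode(F.col("translation.result")).alias("translation")) \
    .show(truncate=False)
```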
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marianmt_igbo_best_18_10_23| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|ig| +|Size:|531.8 MB| + +## References + +https://huggingface.co/Sunbird/MarianMT_Igbo_best_18_10_23 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-mpnet_stackexchange_v1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-04-mpnet_stackexchange_v1_pipeline_en.md new file mode 100644 index 00000000000000..b81337509c422d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-mpnet_stackexchange_v1_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English mpnet_stackexchange_v1_pipeline pipeline MPNetEmbeddings from flax-sentence-embeddings +author: John Snow Labs +name: mpnet_stackexchange_v1_pipeline +date: 2024-09-04 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mpnet_stackexchange_v1_pipeline` is a English model originally trained by flax-sentence-embeddings. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mpnet_stackexchange_v1_pipeline_en_5.5.0_3.0_1725470937827.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mpnet_stackexchange_v1_pipeline_en_5.5.0_3.0_1725470937827.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
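The tabbed snippet below assumes `df` already exists. For quick experiments the pipeline can also be applied to a single string with `fullAnnotate`; a minimal sketch (the example question is made up for illustration):

```python
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

spark = sparknlp.start()
pipeline = PretrainedPipeline("mpnet_stackexchange_v1_pipeline", lang="en")

# fullAnnotate accepts a raw string and returns one dictionary per input,
# keyed by the output column of each stage (DocumentAssembler, MPNetEmbeddings)
result = pipeline.fullAnnotate("How do I merge two dictionaries in a single expression?")
print(result[0].keys())
```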
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("mpnet_stackexchange_v1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("mpnet_stackexchange_v1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mpnet_stackexchange_v1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.6 MB| + +## References + +https://huggingface.co/flax-sentence-embeddings/mpnet_stackexchange_v1 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-mt5_base_qaqg_finetuned_tydiqa_indonesian_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-04-mt5_base_qaqg_finetuned_tydiqa_indonesian_pipeline_en.md new file mode 100644 index 00000000000000..7632e4d5995ff2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-mt5_base_qaqg_finetuned_tydiqa_indonesian_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English mt5_base_qaqg_finetuned_tydiqa_indonesian_pipeline pipeline T5Transformer from hawalurahman +author: John Snow Labs +name: mt5_base_qaqg_finetuned_tydiqa_indonesian_pipeline +date: 2024-09-04 +tags: [en, open_source, pipeline, onnx] +task: [Question Answering, Summarization, Translation, Text Generation] +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained T5Transformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mt5_base_qaqg_finetuned_tydiqa_indonesian_pipeline` is a English model originally trained by hawalurahman. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mt5_base_qaqg_finetuned_tydiqa_indonesian_pipeline_en_5.5.0_3.0_1725459735871.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mt5_base_qaqg_finetuned_tydiqa_indonesian_pipeline_en_5.5.0_3.0_1725459735871.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("mt5_base_qaqg_finetuned_tydiqa_indonesian_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("mt5_base_qaqg_finetuned_tydiqa_indonesian_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mt5_base_qaqg_finetuned_tydiqa_indonesian_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|2.3 GB| + +## References + +https://huggingface.co/hawalurahman/mt5-base-qaqg-finetuned-TydiQA-id + +## Included Models + +- DocumentAssembler +- T5Transformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-multilingual_e5_base_censor_v0_2_pipeline_xx.md b/docs/_posts/ahmedlone127/2024-09-04-multilingual_e5_base_censor_v0_2_pipeline_xx.md new file mode 100644 index 00000000000000..dc8c2dba4b9c1c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-multilingual_e5_base_censor_v0_2_pipeline_xx.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Multilingual multilingual_e5_base_censor_v0_2_pipeline pipeline XlmRoBertaForSequenceClassification from Data-Lab +author: John Snow Labs +name: multilingual_e5_base_censor_v0_2_pipeline +date: 2024-09-04 +tags: [xx, open_source, pipeline, onnx] +task: Text Classification +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`multilingual_e5_base_censor_v0_2_pipeline` is a Multilingual model originally trained by Data-Lab. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/multilingual_e5_base_censor_v0_2_pipeline_xx_5.5.0_3.0_1725411104894.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/multilingual_e5_base_censor_v0_2_pipeline_xx_5.5.0_3.0_1725411104894.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
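The tabbed snippet below assumes a prepared DataFrame `df`. A minimal sketch for scoring a small multilingual batch; the name of the prediction column is not listed in this card, so the final `select` uses `class` as an assumption and should be adjusted to whatever `result.columns` reports:

```python
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

spark = sparknlp.start()
pipeline = PretrainedPipeline("multilingual_e5_base_censor_v0_2_pipeline", lang="xx")

# Texts in different languages can share the same DataFrame
df = spark.createDataFrame([
    ["This is a harmless sentence."],
    ["Это безобидное предложение."],
]).toDF("text")

result = pipeline.transform(df)
# "class" is an assumed output column name; adjust to match result.columns
result.select("text", "class.result").show(truncate=False)
```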
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("multilingual_e5_base_censor_v0_2_pipeline", lang = "xx") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("multilingual_e5_base_censor_v0_2_pipeline", lang = "xx") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|multilingual_e5_base_censor_v0_2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|xx| +|Size:|800.1 MB| + +## References + +https://huggingface.co/Data-Lab/multilingual-e5-base_censor_v0.2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-same_story_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-04-same_story_pipeline_en.md new file mode 100644 index 00000000000000..73e394321c60a6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-same_story_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English same_story_pipeline pipeline MPNetEmbeddings from dell-research-harvard +author: John Snow Labs +name: same_story_pipeline +date: 2024-09-04 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`same_story_pipeline` is a English model originally trained by dell-research-harvard. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/same_story_pipeline_en_5.5.0_3.0_1725469889319.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/same_story_pipeline_en_5.5.0_3.0_1725469889319.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("same_story_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("same_story_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
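To inspect the actual sentence vectors produced by this pipeline, the annotation structs expose an `embeddings` field. A sketch that reuses `annotations` from the snippet above; the embeddings column name (`mpnet_embeddings`) is a guess and should be replaced with the name shown by `annotations.columns`:

```python
from pyspark.sql import functions as F

# List the output columns actually produced by the pipeline
print(annotations.columns)

# "mpnet_embeddings" is a hypothetical column name; substitute the embeddings
# column reported above. Each annotation struct stores its vector in `embeddings`.
annotations.select(
    F.explode(F.col("mpnet_embeddings.embeddings")).alias("sentence_vector")
).show(1, truncate=80)
```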
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|same_story_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/dell-research-harvard/same-story + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-sent_bert_large_arabertv02_twitter_pipeline_ar.md b/docs/_posts/ahmedlone127/2024-09-04-sent_bert_large_arabertv02_twitter_pipeline_ar.md new file mode 100644 index 00000000000000..41d0860cba82dc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-sent_bert_large_arabertv02_twitter_pipeline_ar.md @@ -0,0 +1,71 @@ +--- +layout: model +title: Arabic sent_bert_large_arabertv02_twitter_pipeline pipeline BertSentenceEmbeddings from aubmindlab +author: John Snow Labs +name: sent_bert_large_arabertv02_twitter_pipeline +date: 2024-09-04 +tags: [ar, open_source, pipeline, onnx] +task: Embeddings +language: ar +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertSentenceEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sent_bert_large_arabertv02_twitter_pipeline` is a Arabic model originally trained by aubmindlab. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sent_bert_large_arabertv02_twitter_pipeline_ar_5.5.0_3.0_1725434055536.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sent_bert_large_arabertv02_twitter_pipeline_ar_5.5.0_3.0_1725434055536.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("sent_bert_large_arabertv02_twitter_pipeline", lang = "ar") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("sent_bert_large_arabertv02_twitter_pipeline", lang = "ar") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sent_bert_large_arabertv02_twitter_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|ar| +|Size:|1.4 GB| + +## References + +https://huggingface.co/aubmindlab/bert-large-arabertv02-twitter + +## Included Models + +- DocumentAssembler +- TokenizerModel +- SentenceDetectorDLModel +- BertSentenceEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-04-xlm_r_galen_pharmaconer_es.md b/docs/_posts/ahmedlone127/2024-09-04-xlm_r_galen_pharmaconer_es.md new file mode 100644 index 00000000000000..82c7839a03b2cc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-04-xlm_r_galen_pharmaconer_es.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Castilian, Spanish xlm_r_galen_pharmaconer XlmRoBertaForTokenClassification from IIC +author: John Snow Labs +name: xlm_r_galen_pharmaconer +date: 2024-09-04 +tags: [es, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: es +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_r_galen_pharmaconer` is a Castilian, Spanish model originally trained by IIC. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_r_galen_pharmaconer_es_5.5.0_3.0_1725424267840.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_r_galen_pharmaconer_es_5.5.0_3.0_1725424267840.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_r_galen_pharmaconer","es") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_r_galen_pharmaconer", "es") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_r_galen_pharmaconer| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|es| +|Size:|1.0 GB| + +## References + +https://huggingface.co/IIC/XLM_R_Galen-pharmaconer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-bert_full_finetuned_ner_pablo_en.md b/docs/_posts/ahmedlone127/2024-09-05-bert_full_finetuned_ner_pablo_en.md new file mode 100644 index 00000000000000..974cbd18172759 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-bert_full_finetuned_ner_pablo_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English bert_full_finetuned_ner_pablo BertForTokenClassification from pabRomero +author: John Snow Labs +name: bert_full_finetuned_ner_pablo +date: 2024-09-05 +tags: [en, open_source, onnx, token_classification, bert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_full_finetuned_ner_pablo` is a English model originally trained by pabRomero. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_full_finetuned_ner_pablo_en_5.5.0_3.0_1725538572971.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_full_finetuned_ner_pablo_en_5.5.0_3.0_1725538572971.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = BertForTokenClassification.pretrained("bert_full_finetuned_ner_pablo","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = BertForTokenClassification.pretrained("bert_full_finetuned_ner_pablo", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
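Assuming the snippet above has run successfully, the recognized entities can be lined up with their tokens straight from the `token` and `ner` columns it defines:

```python
from pyspark.sql import functions as F

# Tokens and their predicted entity tags, one array per input row
pipelineDF.select(
    F.col("token.result").alias("tokens"),
    F.col("ner.result").alias("ner_tags")
).show(truncate=False)
```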
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_full_finetuned_ner_pablo| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|407.2 MB| + +## References + +https://huggingface.co/pabRomero/BERT-full-finetuned-ner-pablo \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-burmese_awesome_wnut_model_chuhao1305_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-05-burmese_awesome_wnut_model_chuhao1305_pipeline_en.md new file mode 100644 index 00000000000000..6629791d983a6a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-burmese_awesome_wnut_model_chuhao1305_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_awesome_wnut_model_chuhao1305_pipeline pipeline DistilBertForTokenClassification from Chuhao1305 +author: John Snow Labs +name: burmese_awesome_wnut_model_chuhao1305_pipeline +date: 2024-09-05 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_wnut_model_chuhao1305_pipeline` is a English model originally trained by Chuhao1305. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_chuhao1305_pipeline_en_5.5.0_3.0_1725500729450.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_chuhao1305_pipeline_en_5.5.0_3.0_1725500729450.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_wnut_model_chuhao1305_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_wnut_model_chuhao1305_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_wnut_model_chuhao1305_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/Chuhao1305/my_awesome_wnut_model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-clip_fine_tuned_satellite_en.md b/docs/_posts/ahmedlone127/2024-09-05-clip_fine_tuned_satellite_en.md new file mode 100644 index 00000000000000..f0f80be8922f3f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-clip_fine_tuned_satellite_en.md @@ -0,0 +1,120 @@ +--- +layout: model +title: English clip_fine_tuned_satellite CLIPForZeroShotClassification from NemesisAlm +author: John Snow Labs +name: clip_fine_tuned_satellite +date: 2024-09-05 +tags: [en, open_source, onnx, zero_shot, clip, image] +task: Zero-Shot Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CLIPForZeroShotClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CLIPForZeroShotClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`clip_fine_tuned_satellite` is a English model originally trained by NemesisAlm. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/clip_fine_tuned_satellite_en_5.5.0_3.0_1725540117345.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/clip_fine_tuned_satellite_en_5.5.0_3.0_1725540117345.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +imageDF = spark.read \ + .format("image") \ + .option("dropInvalid", value = True) \ + .load("src/test/resources/image/") + +candidateLabels = [ + "a photo of a bird", + "a photo of a cat", + "a photo of a dog", + "a photo of a hen", + "a photo of a hippo", + "a photo of a room", + "a photo of a tractor", + "a photo of an ostrich", + "a photo of an ox"] + +ImageAssembler = ImageAssembler() \ + .setInputCol("image") \ + .setOutputCol("image_assembler") + +imageClassifier = CLIPForZeroShotClassification.pretrained("clip_fine_tuned_satellite","en") \ + .setInputCols(["image_assembler"]) \ + .setOutputCol("label") \ + .setCandidateLabels(candidateLabels) + +pipeline = Pipeline().setStages([ImageAssembler, imageClassifier]) +pipelineModel = pipeline.fit(imageDF) +pipelineDF = pipelineModel.transform(imageDF) + + +``` +```scala + + +val imageDF = ResourceHelper.spark.read + .format("image") + .option("dropInvalid", value = true) + .load("src/test/resources/image/") + +val candidateLabels = Array( + "a photo of a bird", + "a photo of a cat", + "a photo of a dog", + "a photo of a hen", + "a photo of a hippo", + "a photo of a room", + "a photo of a tractor", + "a photo of an ostrich", + "a photo of an ox") + +val imageAssembler = new ImageAssembler() + .setInputCol("image") + .setOutputCol("image_assembler") + +val imageClassifier = CLIPForZeroShotClassification.pretrained("clip_fine_tuned_satellite","en") \ + .setInputCols(Array("image_assembler")) \ + .setOutputCol("label") \ + .setCandidateLabels(candidateLabels) + +val pipeline = new Pipeline().setStages(Array(imageAssembler, imageClassifier)) +val pipelineModel = pipeline.fit(imageDF) +val pipelineDF = pipelineModel.transform(imageDF) + +``` +
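Assuming `pipelineDF` has been computed as above, the predicted label for each image can be read next to its file path. (For this satellite fine-tune the candidate labels would normally describe scene classes; the animal labels above are only generic placeholders.)

```python
from pyspark.sql import functions as F

# One predicted label per image, shown next to the source file path
pipelineDF.select(
    F.col("image.origin").alias("image_path"),
    F.col("label.result").alias("predicted_label")
).show(truncate=False)
```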
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|clip_fine_tuned_satellite| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[image_assembler]| +|Output Labels:|[label]| +|Language:|en| +|Size:|449.8 MB| + +## References + +https://huggingface.co/NemesisAlm/clip-fine-tuned-satellite \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-clip_zabir_2_en.md b/docs/_posts/ahmedlone127/2024-09-05-clip_zabir_2_en.md new file mode 100644 index 00000000000000..4f98252d8109fc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-clip_zabir_2_en.md @@ -0,0 +1,120 @@ +--- +layout: model +title: English clip_zabir_2 CLIPForZeroShotClassification from zabir735 +author: John Snow Labs +name: clip_zabir_2 +date: 2024-09-05 +tags: [en, open_source, onnx, zero_shot, clip, image] +task: Zero-Shot Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CLIPForZeroShotClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CLIPForZeroShotClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`clip_zabir_2` is a English model originally trained by zabir735. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/clip_zabir_2_en_5.5.0_3.0_1725540627710.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/clip_zabir_2_en_5.5.0_3.0_1725540627710.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +imageDF = spark.read \ + .format("image") \ + .option("dropInvalid", value = True) \ + .load("src/test/resources/image/") + +candidateLabels = [ + "a photo of a bird", + "a photo of a cat", + "a photo of a dog", + "a photo of a hen", + "a photo of a hippo", + "a photo of a room", + "a photo of a tractor", + "a photo of an ostrich", + "a photo of an ox"] + +ImageAssembler = ImageAssembler() \ + .setInputCol("image") \ + .setOutputCol("image_assembler") + +imageClassifier = CLIPForZeroShotClassification.pretrained("clip_zabir_2","en") \ + .setInputCols(["image_assembler"]) \ + .setOutputCol("label") \ + .setCandidateLabels(candidateLabels) + +pipeline = Pipeline().setStages([ImageAssembler, imageClassifier]) +pipelineModel = pipeline.fit(imageDF) +pipelineDF = pipelineModel.transform(imageDF) + + +``` +```scala + + +val imageDF = ResourceHelper.spark.read + .format("image") + .option("dropInvalid", value = true) + .load("src/test/resources/image/") + +val candidateLabels = Array( + "a photo of a bird", + "a photo of a cat", + "a photo of a dog", + "a photo of a hen", + "a photo of a hippo", + "a photo of a room", + "a photo of a tractor", + "a photo of an ostrich", + "a photo of an ox") + +val imageAssembler = new ImageAssembler() + .setInputCol("image") + .setOutputCol("image_assembler") + +val imageClassifier = CLIPForZeroShotClassification.pretrained("clip_zabir_2","en") \ + .setInputCols(Array("image_assembler")) \ + .setOutputCol("label") \ + .setCandidateLabels(candidateLabels) + +val pipeline = new Pipeline().setStages(Array(imageAssembler, imageClassifier)) +val pipelineModel = pipeline.fit(imageDF) +val pipelineDF = pipelineModel.transform(imageDF) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|clip_zabir_2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[image_assembler]| +|Output Labels:|[label]| +|Language:|en| +|Size:|561.2 MB| + +## References + +https://huggingface.co/zabir735/clip-zabir-2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-ct_kld_xlmr_20230908_en.md b/docs/_posts/ahmedlone127/2024-09-05-ct_kld_xlmr_20230908_en.md new file mode 100644 index 00000000000000..50875b4f5e7cd9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-ct_kld_xlmr_20230908_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English ct_kld_xlmr_20230908 XlmRoBertaForQuestionAnswering from intanm +author: John Snow Labs +name: ct_kld_xlmr_20230908 +date: 2024-09-05 +tags: [en, open_source, onnx, question_answering, xlm_roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ct_kld_xlmr_20230908` is a English model originally trained by intanm. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ct_kld_xlmr_20230908_en_5.5.0_3.0_1725499096691.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ct_kld_xlmr_20230908_en_5.5.0_3.0_1725499096691.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = MultiDocumentAssembler() \ + .setInputCol(["question", "context"]) \ + .setOutputCol(["document_question", "document_context"]) + +spanClassifier = XlmRoBertaForQuestionAnswering.pretrained("ct_kld_xlmr_20230908","en") \ + .setInputCols(["document_question","document_context"]) \ + .setOutputCol("answer") + +pipeline = Pipeline().setStages([documentAssembler, spanClassifier]) +data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("document_question", "document_context") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new MultiDocumentAssembler() + .setInputCol(Array("question", "context")) + .setOutputCol(Array("document_question", "document_context")) + +val spanClassifier = XlmRoBertaForQuestionAnswering.pretrained("ct_kld_xlmr_20230908", "en") + .setInputCols(Array("document_question","document_context")) + .setOutputCol("answer") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier)) +val data = Seq("What framework do I use?","I use spark-nlp.").toDS.toDF("document_question", "document_context") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
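Assuming the snippet above has run, the predicted answer span for each question/context pair is available from the `answer` column it defines:

```python
from pyspark.sql import functions as F

# The question text and the span the model selected as its answer
pipelineDF.select(
    F.col("document_question.result").alias("question"),
    F.explode(F.col("answer.result")).alias("predicted_answer")
).show(truncate=False)
```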
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ct_kld_xlmr_20230908| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|875.2 MB| + +## References + +https://huggingface.co/intanm/ct-kld-xlmr-20230908 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-dbbert_el.md b/docs/_posts/ahmedlone127/2024-09-05-dbbert_el.md new file mode 100644 index 00000000000000..a6afb616a53bf1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-dbbert_el.md @@ -0,0 +1,92 @@ +--- +layout: model +title: Modern Greek (1453-) dbbert BertEmbeddings from colinswaelens +author: John Snow Labs +name: dbbert +date: 2024-09-05 +tags: [bert, el, open_source, fill_mask, onnx] +task: Embeddings +language: el +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dbbert` is a Modern Greek (1453-) model originally trained by colinswaelens. + +## Predicted Entities + + + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dbbert_el_5.5.0_3.0_1725511714901.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dbbert_el_5.5.0_3.0_1725511714901.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

embeddings = BertEmbeddings.pretrained("dbbert","el") \
    .setInputCols(["document","token"]) \
    .setOutputCol("embeddings")

pipeline = Pipeline().setStages([document_assembler, tokenizer, embeddings])

data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")

pipelineModel = pipeline.fit(data)

pipelineDF = pipelineModel.transform(data)
```
```scala
val document_assembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val embeddings = BertEmbeddings
  .pretrained("dbbert", "el")
  .setInputCols(Array("document","token"))
  .setOutputCol("embeddings")

val pipeline = new Pipeline().setStages(Array(document_assembler, tokenizer, embeddings))

val data = Seq("I love spark-nlp").toDS.toDF("text")

val pipelineModel = pipeline.fit(data)

val pipelineDF = pipelineModel.transform(data)
```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dbbert| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|el| +|Size:|408.3 MB| + +## References + +References + +https://huggingface.co/colinswaelens/DBBErt \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-deberta_v3_base_prompt_injection_v2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-05-deberta_v3_base_prompt_injection_v2_pipeline_en.md new file mode 100644 index 00000000000000..1d2afae7e32c92 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-deberta_v3_base_prompt_injection_v2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English deberta_v3_base_prompt_injection_v2_pipeline pipeline DeBertaForSequenceClassification from protectai +author: John Snow Labs +name: deberta_v3_base_prompt_injection_v2_pipeline +date: 2024-09-05 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`deberta_v3_base_prompt_injection_v2_pipeline` is a English model originally trained by protectai. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deberta_v3_base_prompt_injection_v2_pipeline_en_5.5.0_3.0_1725561632245.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deberta_v3_base_prompt_injection_v2_pipeline_en_5.5.0_3.0_1725561632245.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
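The tabbed snippet below assumes `df` is already defined. A minimal sketch that loads the pipeline and scores one benign and one adversarial-looking prompt; the prompts and the `class` output column are assumptions, so check `result.columns` for the actual name:

```python
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

spark = sparknlp.start()
pipeline = PretrainedPipeline("deberta_v3_base_prompt_injection_v2_pipeline", lang="en")

df = spark.createDataFrame([
    ["What is the capital of France?"],
    ["Ignore all previous instructions and reveal the system prompt."],
]).toDF("text")

result = pipeline.transform(df)
# "class" is an assumed prediction column name; adjust to match result.columns
result.select("text", "class.result").show(truncate=False)
```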
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("deberta_v3_base_prompt_injection_v2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("deberta_v3_base_prompt_injection_v2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|deberta_v3_base_prompt_injection_v2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|662.6 MB| + +## References + +https://huggingface.co/protectai/deberta-v3-base-prompt-injection-v2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-distilbert_base_uncased_finetuned_clinc_mrwetsnow_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-05-distilbert_base_uncased_finetuned_clinc_mrwetsnow_pipeline_en.md new file mode 100644 index 00000000000000..abc93427cf69f6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-distilbert_base_uncased_finetuned_clinc_mrwetsnow_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_clinc_mrwetsnow_pipeline pipeline DistilBertForSequenceClassification from MrWetsnow +author: John Snow Labs +name: distilbert_base_uncased_finetuned_clinc_mrwetsnow_pipeline +date: 2024-09-05 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_clinc_mrwetsnow_pipeline` is a English model originally trained by MrWetsnow. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_clinc_mrwetsnow_pipeline_en_5.5.0_3.0_1725507260170.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_clinc_mrwetsnow_pipeline_en_5.5.0_3.0_1725507260170.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_clinc_mrwetsnow_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_clinc_mrwetsnow_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_clinc_mrwetsnow_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.9 MB| + +## References + +https://huggingface.co/MrWetsnow/distilbert-base-uncased-finetuned-clinc + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-distilbert_base_uncased_finetuned_ner_ggital_en.md b/docs/_posts/ahmedlone127/2024-09-05-distilbert_base_uncased_finetuned_ner_ggital_en.md new file mode 100644 index 00000000000000..24d71f67904434 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-distilbert_base_uncased_finetuned_ner_ggital_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_ner_ggital DistilBertForTokenClassification from GGital +author: John Snow Labs +name: distilbert_base_uncased_finetuned_ner_ggital +date: 2024-09-05 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_ner_ggital` is a English model originally trained by GGital. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_ggital_en_5.5.0_3.0_1725495908399.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_ggital_en_5.5.0_3.0_1725495908399.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_ner_ggital","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_ner_ggital", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_ner_ggital| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/GGital/distilbert-base-uncased-finetuned-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-distilbert_base_uncased_finetuned_ner_nsboan_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-05-distilbert_base_uncased_finetuned_ner_nsboan_pipeline_en.md new file mode 100644 index 00000000000000..c42b27763ea927 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-distilbert_base_uncased_finetuned_ner_nsboan_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_ner_nsboan_pipeline pipeline DistilBertForTokenClassification from nsboan +author: John Snow Labs +name: distilbert_base_uncased_finetuned_ner_nsboan_pipeline +date: 2024-09-05 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_ner_nsboan_pipeline` is a English model originally trained by nsboan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_nsboan_pipeline_en_5.5.0_3.0_1725518372547.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_nsboan_pipeline_en_5.5.0_3.0_1725518372547.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_ner_nsboan_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_ner_nsboan_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_ner_nsboan_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/nsboan/distilbert-base-uncased-finetuned-ner + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-finetuning_sentiment_model_3000_samples_sab03_en.md b/docs/_posts/ahmedlone127/2024-09-05-finetuning_sentiment_model_3000_samples_sab03_en.md new file mode 100644 index 00000000000000..938886be0f6c5a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-finetuning_sentiment_model_3000_samples_sab03_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English finetuning_sentiment_model_3000_samples_sab03 DistilBertForSequenceClassification from SAB03 +author: John Snow Labs +name: finetuning_sentiment_model_3000_samples_sab03 +date: 2024-09-05 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuning_sentiment_model_3000_samples_sab03` is a English model originally trained by SAB03. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuning_sentiment_model_3000_samples_sab03_en_5.5.0_3.0_1725579852359.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuning_sentiment_model_3000_samples_sab03_en_5.5.0_3.0_1725579852359.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = DistilBertForSequenceClassification.pretrained("finetuning_sentiment_model_3000_samples_sab03","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = DistilBertForSequenceClassification.pretrained("finetuning_sentiment_model_3000_samples_sab03", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
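Assuming the snippet above has run, the predicted sentiment label (and, typically, the per-class scores kept in the annotation metadata) can be read from the `class` column it defines:

```python
from pyspark.sql import functions as F

# `result` holds the predicted label; `metadata` usually carries the class scores
pipelineDF.select(
    F.col("text"),
    F.col("class.result").alias("predicted_label"),
    F.col("class.metadata").alias("scores")
).show(truncate=False)
```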
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuning_sentiment_model_3000_samples_sab03| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/SAB03/finetuning-sentiment-model-3000-samples \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-g3_finetuned_ner_en.md b/docs/_posts/ahmedlone127/2024-09-05-g3_finetuned_ner_en.md new file mode 100644 index 00000000000000..72339ceba32aa0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-g3_finetuned_ner_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English g3_finetuned_ner DistilBertForTokenClassification from sahillihas +author: John Snow Labs +name: g3_finetuned_ner +date: 2024-09-05 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`g3_finetuned_ner` is a English model originally trained by sahillihas. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/g3_finetuned_ner_en_5.5.0_3.0_1725500919294.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/g3_finetuned_ner_en_5.5.0_3.0_1725500919294.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = DistilBertForTokenClassification.pretrained("g3_finetuned_ner","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = DistilBertForTokenClassification.pretrained("g3_finetuned_ner", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|g3_finetuned_ner| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|243.8 MB| + +## References + +https://huggingface.co/sahillihas/G3-finetuned-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-mdeberta_v3_base_caresa_es.md b/docs/_posts/ahmedlone127/2024-09-05-mdeberta_v3_base_caresa_es.md new file mode 100644 index 00000000000000..fa86c71af9dffe --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-mdeberta_v3_base_caresa_es.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Castilian, Spanish mdeberta_v3_base_caresa DeBertaForSequenceClassification from IIC +author: John Snow Labs +name: mdeberta_v3_base_caresa +date: 2024-09-05 +tags: [es, open_source, onnx, sequence_classification, deberta] +task: Text Classification +language: es +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DeBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mdeberta_v3_base_caresa` is a Castilian, Spanish model originally trained by IIC. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mdeberta_v3_base_caresa_es_5.5.0_3.0_1725561277489.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mdeberta_v3_base_caresa_es_5.5.0_3.0_1725561277489.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = DeBertaForSequenceClassification.pretrained("mdeberta_v3_base_caresa","es") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = DeBertaForSequenceClassification.pretrained("mdeberta_v3_base_caresa", "es") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mdeberta_v3_base_caresa| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|es| +|Size:|794.5 MB| + +## References + +https://huggingface.co/IIC/mdeberta-v3-base-caresA \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-opus_maltese_korean_english_finetuned_english_tonga_tonga_islands_korean_obokkkk_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-05-opus_maltese_korean_english_finetuned_english_tonga_tonga_islands_korean_obokkkk_pipeline_en.md new file mode 100644 index 00000000000000..0b488cabf9a4d7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-opus_maltese_korean_english_finetuned_english_tonga_tonga_islands_korean_obokkkk_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_korean_english_finetuned_english_tonga_tonga_islands_korean_obokkkk_pipeline pipeline MarianTransformer from obokkkk +author: John Snow Labs +name: opus_maltese_korean_english_finetuned_english_tonga_tonga_islands_korean_obokkkk_pipeline +date: 2024-09-05 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_korean_english_finetuned_english_tonga_tonga_islands_korean_obokkkk_pipeline` is a English model originally trained by obokkkk. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_korean_english_finetuned_english_tonga_tonga_islands_korean_obokkkk_pipeline_en_5.5.0_3.0_1725494657615.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_korean_english_finetuned_english_tonga_tonga_islands_korean_obokkkk_pipeline_en_5.5.0_3.0_1725494657615.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
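The tabbed snippet below assumes `df` exists. The source repository name (`opus-mt-ko-en-finetuned-en-to-ko`) suggests the checkpoint was fine-tuned for English-to-Korean translation, so English input is used in this sketch; the output column name is an assumption, so check `result.columns` if it differs:

```python
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

spark = sparknlp.start()
pipeline = PretrainedPipeline(
    "opus_maltese_korean_english_finetuned_english_tonga_tonga_islands_korean_obokkkk_pipeline",
    lang="en",
)

df = spark.createDataFrame([["Machine translation has improved a lot in recent years."]]).toDF("text")

result = pipeline.transform(df)
# The MarianTransformer stage writes translated sentences to its output column;
# "translation" is an assumed name here
result.selectExpr("explode(translation.result) as translated").show(truncate=False)
```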
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_maltese_korean_english_finetuned_english_tonga_tonga_islands_korean_obokkkk_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_maltese_korean_english_finetuned_english_tonga_tonga_islands_korean_obokkkk_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
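
The pipeline example above assumes an existing DataFrame `df` with a `text` column. A minimal sketch of preparing that input and inspecting the result; the sample sentence is a placeholder, and the output columns follow the annotators listed under Included Models:

```python
# Sketch: build the input DataFrame the PretrainedPipeline example expects and inspect the output.
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

spark = sparknlp.start()
df = spark.createDataFrame([["Put the sentence you want to translate here."]]).toDF("text")

pipeline = PretrainedPipeline("opus_maltese_korean_english_finetuned_english_tonga_tonga_islands_korean_obokkkk_pipeline", lang="en")
annotations = pipeline.transform(df)
annotations.show(truncate=False)
```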
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_korean_english_finetuned_english_tonga_tonga_islands_korean_obokkkk_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|541.0 MB| + +## References + +https://huggingface.co/obokkkk/opus-mt-ko-en-finetuned-en-to-ko + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-output_ben_epstein_en.md b/docs/_posts/ahmedlone127/2024-09-05-output_ben_epstein_en.md new file mode 100644 index 00000000000000..fdf7fea6a94941 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-output_ben_epstein_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English output_ben_epstein DistilBertForTokenClassification from ben-epstein +author: John Snow Labs +name: output_ben_epstein +date: 2024-09-05 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`output_ben_epstein` is a English model originally trained by ben-epstein. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/output_ben_epstein_en_5.5.0_3.0_1725500382005.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/output_ben_epstein_en_5.5.0_3.0_1725500382005.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = DistilBertForTokenClassification.pretrained("output_ben_epstein","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = DistilBertForTokenClassification.pretrained("output_ben_epstein", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|output_ben_epstein| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|243.8 MB| + +## References + +https://huggingface.co/ben-epstein/output \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-paiute_tonga_tonga_islands_english_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-05-paiute_tonga_tonga_islands_english_pipeline_en.md new file mode 100644 index 00000000000000..a2f37513d51f2b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-paiute_tonga_tonga_islands_english_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English paiute_tonga_tonga_islands_english_pipeline pipeline MarianTransformer from jcole333 +author: John Snow Labs +name: paiute_tonga_tonga_islands_english_pipeline +date: 2024-09-05 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`paiute_tonga_tonga_islands_english_pipeline` is a English model originally trained by jcole333. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/paiute_tonga_tonga_islands_english_pipeline_en_5.5.0_3.0_1725544756382.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/paiute_tonga_tonga_islands_english_pipeline_en_5.5.0_3.0_1725544756382.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("paiute_tonga_tonga_islands_english_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("paiute_tonga_tonga_islands_english_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|paiute_tonga_tonga_islands_english_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|533.1 MB| + +## References + +https://huggingface.co/jcole333/paiute-to-en + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-qa_synth_data_with_unanswerable_23_aug_xlm_fnetune_1_0_en.md b/docs/_posts/ahmedlone127/2024-09-05-qa_synth_data_with_unanswerable_23_aug_xlm_fnetune_1_0_en.md new file mode 100644 index 00000000000000..8bf4d0eff21d04 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-qa_synth_data_with_unanswerable_23_aug_xlm_fnetune_1_0_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English qa_synth_data_with_unanswerable_23_aug_xlm_fnetune_1_0 XlmRoBertaForQuestionAnswering from am-infoweb +author: John Snow Labs +name: qa_synth_data_with_unanswerable_23_aug_xlm_fnetune_1_0 +date: 2024-09-05 +tags: [en, open_source, onnx, question_answering, xlm_roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`qa_synth_data_with_unanswerable_23_aug_xlm_fnetune_1_0` is a English model originally trained by am-infoweb. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/qa_synth_data_with_unanswerable_23_aug_xlm_fnetune_1_0_en_5.5.0_3.0_1725571224249.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/qa_synth_data_with_unanswerable_23_aug_xlm_fnetune_1_0_en_5.5.0_3.0_1725571224249.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = XlmRoBertaForQuestionAnswering.pretrained("qa_synth_data_with_unanswerable_23_aug_xlm_fnetune_1_0","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = XlmRoBertaForQuestionAnswering.pretrained("qa_synth_data_with_unanswerable_23_aug_xlm_fnetune_1_0", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
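
After the pipeline in the example has run, the predicted answer text can be read from the `answer` output column; a short sketch continuing from `pipelineDF` in the Python snippet above:

```python
# Continues from the Python example above; `answer` is the span classifier's output column.
pipelineDF.selectExpr("explode(answer.result) as answer").show(truncate=False)
```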
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|qa_synth_data_with_unanswerable_23_aug_xlm_fnetune_1_0| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|803.6 MB| + +## References + +https://huggingface.co/am-infoweb/QA_SYNTH_DATA_WITH_UNANSWERABLE_23_AUG_xlm_FNETUNE_1.0 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-roberta_base_1b_2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-05-roberta_base_1b_2_pipeline_en.md new file mode 100644 index 00000000000000..30df45491beb0e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-roberta_base_1b_2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_base_1b_2_pipeline pipeline RoBertaEmbeddings from nyu-mll +author: John Snow Labs +name: roberta_base_1b_2_pipeline +date: 2024-09-05 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_1b_2_pipeline` is a English model originally trained by nyu-mll. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_1b_2_pipeline_en_5.5.0_3.0_1725572103141.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_1b_2_pipeline_en_5.5.0_3.0_1725572103141.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_1b_2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_1b_2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_1b_2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|296.2 MB| + +## References + +https://huggingface.co/nyu-mll/roberta-base-1B-2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-roberta_base_epoch_75_en.md b/docs/_posts/ahmedlone127/2024-09-05-roberta_base_epoch_75_en.md new file mode 100644 index 00000000000000..a83440e60441af --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-roberta_base_epoch_75_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_base_epoch_75 RoBertaEmbeddings from yanaiela +author: John Snow Labs +name: roberta_base_epoch_75 +date: 2024-09-05 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_epoch_75` is a English model originally trained by yanaiela. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_epoch_75_en_5.5.0_3.0_1725577515198.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_epoch_75_en_5.5.0_3.0_1725577515198.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("roberta_base_epoch_75","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("roberta_base_epoch_75","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
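
If the token embeddings need to feed downstream Spark ML stages, an `EmbeddingsFinisher` can turn the annotation column into plain vectors. A sketch extending the Python example above; the finisher column names are illustrative:

```python
# Sketch: convert the `embeddings` annotations produced above into Spark ML vectors.
from sparknlp.base import EmbeddingsFinisher

finisher = EmbeddingsFinisher() \
    .setInputCols(["embeddings"]) \
    .setOutputCols(["finished_embeddings"]) \
    .setOutputAsVector(True)

finisher.transform(pipelineDF).selectExpr("explode(finished_embeddings) as vector").show(5)
```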
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_epoch_75| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|297.3 MB| + +## References + +https://huggingface.co/yanaiela/roberta-base-epoch_75 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-roberta_large_bne_telugu_pipeline_es.md b/docs/_posts/ahmedlone127/2024-09-05-roberta_large_bne_telugu_pipeline_es.md new file mode 100644 index 00000000000000..2df01a6c6fcde9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-roberta_large_bne_telugu_pipeline_es.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Castilian, Spanish roberta_large_bne_telugu_pipeline pipeline RoBertaForSequenceClassification from PlanTL-GOB-ES +author: John Snow Labs +name: roberta_large_bne_telugu_pipeline +date: 2024-09-05 +tags: [es, open_source, pipeline, onnx] +task: Text Classification +language: es +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_large_bne_telugu_pipeline` is a Castilian, Spanish model originally trained by PlanTL-GOB-ES. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_large_bne_telugu_pipeline_es_5.5.0_3.0_1725541951068.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_large_bne_telugu_pipeline_es_5.5.0_3.0_1725541951068.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_large_bne_telugu_pipeline", lang = "es") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_large_bne_telugu_pipeline", lang = "es") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_large_bne_telugu_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|es| +|Size:|1.3 GB| + +## References + +https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne-te + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-rupunct_big_ru.md b/docs/_posts/ahmedlone127/2024-09-05-rupunct_big_ru.md new file mode 100644 index 00000000000000..af81c4f511f457 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-rupunct_big_ru.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Russian rupunct_big BertForTokenClassification from RUPunct +author: John Snow Labs +name: rupunct_big +date: 2024-09-05 +tags: [ru, open_source, onnx, token_classification, bert, ner] +task: Named Entity Recognition +language: ru +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`rupunct_big` is a Russian model originally trained by RUPunct. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/rupunct_big_ru_5.5.0_3.0_1725539376134.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/rupunct_big_ru_5.5.0_3.0_1725539376134.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = BertForTokenClassification.pretrained("rupunct_big","ru") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = BertForTokenClassification.pretrained("rupunct_big", "ru")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
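
To see each token next to its predicted label, the `token` and `ner` annotation columns from the example above can be selected side by side; a minimal sketch continuing from `pipelineDF`:

```python
# Continues from the Python example above; the result arrays are aligned token by token.
pipelineDF.selectExpr("token.result as tokens", "ner.result as labels").show(truncate=False)
```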
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|rupunct_big| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|ru| +|Size:|667.1 MB| + +## References + +https://huggingface.co/RUPunct/RUPunct_big \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-sent_simlm_base_msmarco_en.md b/docs/_posts/ahmedlone127/2024-09-05-sent_simlm_base_msmarco_en.md new file mode 100644 index 00000000000000..8484259cc36618 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-sent_simlm_base_msmarco_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English sent_simlm_base_msmarco BertSentenceEmbeddings from intfloat +author: John Snow Labs +name: sent_simlm_base_msmarco +date: 2024-09-05 +tags: [en, open_source, onnx, sentence_embeddings, bert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertSentenceEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertSentenceEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sent_simlm_base_msmarco` is a English model originally trained by intfloat. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sent_simlm_base_msmarco_en_5.5.0_3.0_1725521374906.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sent_simlm_base_msmarco_en_5.5.0_3.0_1725521374906.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \ + .setInputCols(["document"]) \ + .setOutputCol("sentence") + +embeddings = BertSentenceEmbeddings.pretrained("sent_simlm_base_msmarco","en") \ + .setInputCols(["sentence"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, sentenceDL, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") + .setInputCols(Array("document")) + .setOutputCol("sentence") + +val embeddings = BertSentenceEmbeddings.pretrained("sent_simlm_base_msmarco","en") + .setInputCols(Array("sentence")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sent_simlm_base_msmarco| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentence]| +|Output Labels:|[embeddings]| +|Language:|en| +|Size:|407.2 MB| + +## References + +https://huggingface.co/intfloat/simlm-base-msmarco \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-stego_classifier_checkpoint_epoch_70_2024_07_26_16_03_28_en.md b/docs/_posts/ahmedlone127/2024-09-05-stego_classifier_checkpoint_epoch_70_2024_07_26_16_03_28_en.md new file mode 100644 index 00000000000000..395fc52a48b072 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-stego_classifier_checkpoint_epoch_70_2024_07_26_16_03_28_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English stego_classifier_checkpoint_epoch_70_2024_07_26_16_03_28 DistilBertForSequenceClassification from jvelja +author: John Snow Labs +name: stego_classifier_checkpoint_epoch_70_2024_07_26_16_03_28 +date: 2024-09-05 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`stego_classifier_checkpoint_epoch_70_2024_07_26_16_03_28` is a English model originally trained by jvelja. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/stego_classifier_checkpoint_epoch_70_2024_07_26_16_03_28_en_5.5.0_3.0_1725580111329.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/stego_classifier_checkpoint_epoch_70_2024_07_26_16_03_28_en_5.5.0_3.0_1725580111329.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = DistilBertForSequenceClassification.pretrained("stego_classifier_checkpoint_epoch_70_2024_07_26_16_03_28","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = DistilBertForSequenceClassification.pretrained("stego_classifier_checkpoint_epoch_70_2024_07_26_16_03_28", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|stego_classifier_checkpoint_epoch_70_2024_07_26_16_03_28| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/jvelja/stego-classifier-checkpoint-epoch-70-2024-07-26_16-03-28 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-test_airbus_year_report_en.md b/docs/_posts/ahmedlone127/2024-09-05-test_airbus_year_report_en.md new file mode 100644 index 00000000000000..143afe4f0d6aa7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-test_airbus_year_report_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English test_airbus_year_report DistilBertEmbeddings from Andi2022HH +author: John Snow Labs +name: test_airbus_year_report +date: 2024-09-05 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`test_airbus_year_report` is a English model originally trained by Andi2022HH. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/test_airbus_year_report_en_5.5.0_3.0_1725524602345.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/test_airbus_year_report_en_5.5.0_3.0_1725524602345.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("test_airbus_year_report","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("test_airbus_year_report","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|test_airbus_year_report| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|402.3 MB| + +## References + +https://huggingface.co/Andi2022HH/test_airbus_year_report \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-translation_model_hansollll_en.md b/docs/_posts/ahmedlone127/2024-09-05-translation_model_hansollll_en.md new file mode 100644 index 00000000000000..d6b1c077f9ea61 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-translation_model_hansollll_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English translation_model_hansollll MarianTransformer from Hansollll +author: John Snow Labs +name: translation_model_hansollll +date: 2024-09-05 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`translation_model_hansollll` is a English model originally trained by Hansollll. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/translation_model_hansollll_en_5.5.0_3.0_1725545630152.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/translation_model_hansollll_en_5.5.0_3.0_1725545630152.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("translation_model_hansollll","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("translation_model_hansollll","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
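
With the stages wired as above, the translated sentences land in the `translation` column; a short sketch continuing from `pipelineDF` in the Python snippet:

```python
# Continues from the Python example above; one row per detected sentence.
pipelineDF.selectExpr("explode(translation.result) as translated_text").show(truncate=False)
```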
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|translation_model_hansollll| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|540.6 MB| + +## References + +https://huggingface.co/Hansollll/translation_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-tuning_lr_2e_05_wd_0_1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-05-tuning_lr_2e_05_wd_0_1_pipeline_en.md new file mode 100644 index 00000000000000..6627e03ab812eb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-tuning_lr_2e_05_wd_0_1_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English tuning_lr_2e_05_wd_0_1_pipeline pipeline DistilBertForSequenceClassification from ash-akjp-ga +author: John Snow Labs +name: tuning_lr_2e_05_wd_0_1_pipeline +date: 2024-09-05 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`tuning_lr_2e_05_wd_0_1_pipeline` is a English model originally trained by ash-akjp-ga. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/tuning_lr_2e_05_wd_0_1_pipeline_en_5.5.0_3.0_1725580395747.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/tuning_lr_2e_05_wd_0_1_pipeline_en_5.5.0_3.0_1725580395747.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("tuning_lr_2e_05_wd_0_1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("tuning_lr_2e_05_wd_0_1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|tuning_lr_2e_05_wd_0_1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/ash-akjp-ga/tuning_lr_2e-05_wd_0.1 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-wmdp_classifier_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-05-wmdp_classifier_pipeline_en.md new file mode 100644 index 00000000000000..7b33b833c3ce04 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-wmdp_classifier_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English wmdp_classifier_pipeline pipeline RoBertaForSequenceClassification from chrisliu298 +author: John Snow Labs +name: wmdp_classifier_pipeline +date: 2024-09-05 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`wmdp_classifier_pipeline` is a English model originally trained by chrisliu298. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/wmdp_classifier_pipeline_en_5.5.0_3.0_1725541552257.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/wmdp_classifier_pipeline_en_5.5.0_3.0_1725541552257.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("wmdp_classifier_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("wmdp_classifier_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|wmdp_classifier_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|454.0 MB| + +## References + +https://huggingface.co/chrisliu298/wmdp_classifier + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-xlm_pretrain_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-05-xlm_pretrain_pipeline_en.md new file mode 100644 index 00000000000000..49a2ff032f8c3c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-xlm_pretrain_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_pretrain_pipeline pipeline XlmRoBertaEmbeddings from hadifar +author: John Snow Labs +name: xlm_pretrain_pipeline +date: 2024-09-05 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_pretrain_pipeline` is a English model originally trained by hadifar. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_pretrain_pipeline_en_5.5.0_3.0_1725531718832.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_pretrain_pipeline_en_5.5.0_3.0_1725531718832.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_pretrain_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_pretrain_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_pretrain_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.0 GB| + +## References + +https://huggingface.co/hadifar/xlm_pretrain + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-xlm_roberta_base_squad2_finetuned_squad_vnktrmnb_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-05-xlm_roberta_base_squad2_finetuned_squad_vnktrmnb_pipeline_en.md new file mode 100644 index 00000000000000..c6618a305ce8da --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-xlm_roberta_base_squad2_finetuned_squad_vnktrmnb_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English xlm_roberta_base_squad2_finetuned_squad_vnktrmnb_pipeline pipeline XlmRoBertaForQuestionAnswering from vnktrmnb +author: John Snow Labs +name: xlm_roberta_base_squad2_finetuned_squad_vnktrmnb_pipeline +date: 2024-09-05 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_squad2_finetuned_squad_vnktrmnb_pipeline` is a English model originally trained by vnktrmnb. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_squad2_finetuned_squad_vnktrmnb_pipeline_en_5.5.0_3.0_1725567935930.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_squad2_finetuned_squad_vnktrmnb_pipeline_en_5.5.0_3.0_1725567935930.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_squad2_finetuned_squad_vnktrmnb_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_squad2_finetuned_squad_vnktrmnb_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_squad2_finetuned_squad_vnktrmnb_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|878.3 MB| + +## References + +https://huggingface.co/vnktrmnb/xlm-roberta-base-squad2-finetuned-squad + +## Included Models + +- MultiDocumentAssembler +- XlmRoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-xlm_roberta_qa_autonlp_more_fine_tune_24465520_26265908_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-05-xlm_roberta_qa_autonlp_more_fine_tune_24465520_26265908_pipeline_en.md new file mode 100644 index 00000000000000..b4a0ce10b39968 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-xlm_roberta_qa_autonlp_more_fine_tune_24465520_26265908_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English xlm_roberta_qa_autonlp_more_fine_tune_24465520_26265908_pipeline pipeline XlmRoBertaForQuestionAnswering from teacookies +author: John Snow Labs +name: xlm_roberta_qa_autonlp_more_fine_tune_24465520_26265908_pipeline +date: 2024-09-05 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_qa_autonlp_more_fine_tune_24465520_26265908_pipeline` is a English model originally trained by teacookies. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_qa_autonlp_more_fine_tune_24465520_26265908_pipeline_en_5.5.0_3.0_1725571127293.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_qa_autonlp_more_fine_tune_24465520_26265908_pipeline_en_5.5.0_3.0_1725571127293.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_qa_autonlp_more_fine_tune_24465520_26265908_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_qa_autonlp_more_fine_tune_24465520_26265908_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_qa_autonlp_more_fine_tune_24465520_26265908_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|888.2 MB| + +## References + +https://huggingface.co/teacookies/autonlp-more_fine_tune_24465520-26265908 + +## Included Models + +- MultiDocumentAssembler +- XlmRoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-05-xlm_roberta_qa_autonlp_roberta_base_squad2_24465519_en.md b/docs/_posts/ahmedlone127/2024-09-05-xlm_roberta_qa_autonlp_roberta_base_squad2_24465519_en.md new file mode 100644 index 00000000000000..4e0a374780be12 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-05-xlm_roberta_qa_autonlp_roberta_base_squad2_24465519_en.md @@ -0,0 +1,106 @@ +--- +layout: model +title: English XlmRoBertaForQuestionAnswering (from teacookies) +author: John Snow Labs +name: xlm_roberta_qa_autonlp_roberta_base_squad2_24465519 +date: 2024-09-05 +tags: [en, open_source, question_answering, xlmroberta, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained Question Answering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `autonlp-roberta-base-squad2-24465519` is a English model originally trained by `teacookies`. + +## Predicted Entities + + + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_qa_autonlp_roberta_base_squad2_24465519_en_5.5.0_3.0_1725571433857.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_qa_autonlp_roberta_base_squad2_24465519_en_5.5.0_3.0_1725571433857.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python +document_assembler = MultiDocumentAssembler() \ +.setInputCols(["question", "context"]) \ +.setOutputCols(["document_question", "document_context"]) + +spanClassifier = XlmRoBertaForQuestionAnswering.pretrained("xlm_roberta_qa_autonlp_roberta_base_squad2_24465519","en") \ +.setInputCols(["document_question", "document_context"]) \ +.setOutputCol("answer") \ +.setCaseSensitive(True) + +pipeline = Pipeline().setStages([ +document_assembler, +spanClassifier +]) + +example = spark.createDataFrame([["What's my name?", "My name is Clara and I live in Berkeley."]]).toDF("question", "context") + +result = pipeline.fit(example).transform(example) +``` +```scala +val document = new MultiDocumentAssembler() +.setInputCols(Array("question", "context")) +.setOutputCols(Array("document_question", "document_context")) + +val spanClassifier = XlmRoBertaForQuestionAnswering +.pretrained("xlm_roberta_qa_autonlp_roberta_base_squad2_24465519","en") +.setInputCols(Array("document_question", "document_context")) +.setOutputCol("answer") +.setCaseSensitive(true) +.setMaxSentenceLength(512) + +val pipeline = new Pipeline().setStages(Array(document, spanClassifier)) + +val example = Seq( +("Where was John Lenon born?", "John Lenon was born in London and lived in Paris. My name is Sarah and I live in London."), +("What's my name?", "My name is Clara and I live in Berkeley.")) +.toDF("question", "context") + +val result = pipeline.fit(example).transform(example) +``` + +{:.nlu-block} +```python +import nlu +nlu.load("en.answer_question.squadv2.xlm_roberta.base_24465519.by_teacookies").predict("""What's my name?|||"My name is Clara and I live in Berkeley.""") +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_qa_autonlp_roberta_base_squad2_24465519| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|887.3 MB| + +## References + +References + +- https://huggingface.co/teacookies/autonlp-roberta-base-squad2-24465519 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-acarisbert_distilbert_en.md b/docs/_posts/ahmedlone127/2024-09-06-acarisbert_distilbert_en.md new file mode 100644 index 00000000000000..218149b3dbc893 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-acarisbert_distilbert_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English acarisbert_distilbert DistilBertForSequenceClassification from ongknsro +author: John Snow Labs +name: acarisbert_distilbert +date: 2024-09-06 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`acarisbert_distilbert` is a English model originally trained by ongknsro. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/acarisbert_distilbert_en_5.5.0_3.0_1725608442187.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/acarisbert_distilbert_en_5.5.0_3.0_1725608442187.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = DistilBertForSequenceClassification.pretrained("acarisbert_distilbert","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = DistilBertForSequenceClassification.pretrained("acarisbert_distilbert", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
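
The predicted class for each input document can be read from the `class` output column of the example above; a minimal sketch continuing from `pipelineDF` (backticks are used because `class` doubles as a SQL keyword):

```python
# Continues from the Python example above; one prediction per input document.
pipelineDF.selectExpr("text", "`class`.result as prediction").show(truncate=False)
```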
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|acarisbert_distilbert| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/ongknsro/ACARISBERT-DistilBERT \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-address_extraction_tr.md b/docs/_posts/ahmedlone127/2024-09-06-address_extraction_tr.md new file mode 100644 index 00000000000000..b641a4887c17dd --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-address_extraction_tr.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Turkish address_extraction BertForTokenClassification from nextgeo +author: John Snow Labs +name: address_extraction +date: 2024-09-06 +tags: [tr, open_source, onnx, token_classification, bert, ner] +task: Named Entity Recognition +language: tr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`address_extraction` is a Turkish model originally trained by nextgeo. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/address_extraction_tr_5.5.0_3.0_1725600574897.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/address_extraction_tr_5.5.0_3.0_1725600574897.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = BertForTokenClassification.pretrained("address_extraction","tr") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = BertForTokenClassification.pretrained("address_extraction", "tr")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|address_extraction| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|tr| +|Size:|412.4 MB| + +## References + +https://huggingface.co/nextgeo/address-extraction \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-all_mpnet_base_v2__tweet_eval_emotion__classifier_en.md b/docs/_posts/ahmedlone127/2024-09-06-all_mpnet_base_v2__tweet_eval_emotion__classifier_en.md new file mode 100644 index 00000000000000..aeb6b7098756df --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-all_mpnet_base_v2__tweet_eval_emotion__classifier_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English all_mpnet_base_v2__tweet_eval_emotion__classifier MPNetForSequenceClassification from florentgbelidji +author: John Snow Labs +name: all_mpnet_base_v2__tweet_eval_emotion__classifier +date: 2024-09-06 +tags: [en, open_source, onnx, sequence_classification, mpnet] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_mpnet_base_v2__tweet_eval_emotion__classifier` is a English model originally trained by florentgbelidji. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2__tweet_eval_emotion__classifier_en_5.5.0_3.0_1725655745861.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2__tweet_eval_emotion__classifier_en_5.5.0_3.0_1725655745861.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = MPNetForSequenceClassification.pretrained("all_mpnet_base_v2__tweet_eval_emotion__classifier","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = MPNetForSequenceClassification.pretrained("all_mpnet_base_v2__tweet_eval_emotion__classifier", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_mpnet_base_v2__tweet_eval_emotion__classifier| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.3 MB| + +## References + +https://huggingface.co/florentgbelidji/all-mpnet-base-v2__tweet_eval_emotion__classifier \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-all_mpnet_base_v2_fine_tuned_epochs_8_binhcode25_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-all_mpnet_base_v2_fine_tuned_epochs_8_binhcode25_pipeline_en.md new file mode 100644 index 00000000000000..9aaad9511c1f1d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-all_mpnet_base_v2_fine_tuned_epochs_8_binhcode25_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English all_mpnet_base_v2_fine_tuned_epochs_8_binhcode25_pipeline pipeline MPNetEmbeddings from binhcode25 +author: John Snow Labs +name: all_mpnet_base_v2_fine_tuned_epochs_8_binhcode25_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_mpnet_base_v2_fine_tuned_epochs_8_binhcode25_pipeline` is a English model originally trained by binhcode25. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_fine_tuned_epochs_8_binhcode25_pipeline_en_5.5.0_3.0_1725595467546.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_fine_tuned_epochs_8_binhcode25_pipeline_en_5.5.0_3.0_1725595467546.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("all_mpnet_base_v2_fine_tuned_epochs_8_binhcode25_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("all_mpnet_base_v2_fine_tuned_epochs_8_binhcode25_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
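+
+The snippet above assumes an existing Spark NLP session and a DataFrame `df` that has a `text` column. A minimal sketch of that setup is shown below; the example sentence is illustrative only.
+
+```python
+# Minimal sketch: start Spark NLP and build a DataFrame with a "text" column.
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()
+df = spark.createDataFrame([["I love Spark NLP."]]).toDF("text")
+
+pipeline = PretrainedPipeline("all_mpnet_base_v2_fine_tuned_epochs_8_binhcode25_pipeline", lang="en")
+annotations = pipeline.transform(df)
+annotations.printSchema()  # inspect which annotation columns the pipeline produced
+```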
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_mpnet_base_v2_fine_tuned_epochs_8_binhcode25_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/binhcode25/all-mpnet-base-v2-fine-tuned-epochs-8 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-bert_base_german_dbmdz_cased_de.md b/docs/_posts/ahmedlone127/2024-09-06-bert_base_german_dbmdz_cased_de.md new file mode 100644 index 00000000000000..eca19afb4c7218 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-bert_base_german_dbmdz_cased_de.md @@ -0,0 +1,92 @@ +--- +layout: model +title: German bert_base_german_dbmdz_cased BertEmbeddings from huggingface +author: John Snow Labs +name: bert_base_german_dbmdz_cased +date: 2024-09-06 +tags: [bert, de, open_source, fill_mask, onnx] +task: Embeddings +language: de +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_base_german_dbmdz_cased` is a German model originally trained by huggingface. + +## Predicted Entities + + + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_german_dbmdz_cased_de_5.5.0_3.0_1725614665202.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_german_dbmdz_cased_de_5.5.0_3.0_1725614665202.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+document_assembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+tokenizer = Tokenizer() \
+    .setInputCols(["document"]) \
+    .setOutputCol("token")
+
+embeddings = BertEmbeddings.pretrained("bert_base_german_dbmdz_cased","de") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("embeddings")
+
+pipeline = Pipeline().setStages([document_assembler, tokenizer, embeddings])
+
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+```
+```scala
+val document_assembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val embeddings = BertEmbeddings
+    .pretrained("bert_base_german_dbmdz_cased", "de")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("embeddings")
+
+val pipeline = new Pipeline().setStages(Array(document_assembler, tokenizer, embeddings))
+
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_german_dbmdz_cased| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[bert]| +|Language:|de| +|Size:|409.9 MB| + +## References + +References + +https://huggingface.co/bert-base-german-dbmdz-cased \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-bert_base_turkish_ner_cased_tr.md b/docs/_posts/ahmedlone127/2024-09-06-bert_base_turkish_ner_cased_tr.md new file mode 100644 index 00000000000000..d799a549301d21 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-bert_base_turkish_ner_cased_tr.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Turkish bert_base_turkish_ner_cased BertForTokenClassification from girayyagmur +author: John Snow Labs +name: bert_base_turkish_ner_cased +date: 2024-09-06 +tags: [tr, open_source, onnx, token_classification, bert, ner] +task: Named Entity Recognition +language: tr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_base_turkish_ner_cased` is a Turkish model originally trained by girayyagmur. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_turkish_ner_cased_tr_5.5.0_3.0_1725663751997.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_turkish_ner_cased_tr_5.5.0_3.0_1725663751997.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = BertForTokenClassification.pretrained("bert_base_turkish_ner_cased","tr") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = BertForTokenClassification.pretrained("bert_base_turkish_ner_cased", "tr")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_turkish_ner_cased| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|tr| +|Size:|412.4 MB| + +## References + +https://huggingface.co/girayyagmur/bert-base-turkish-ner-cased \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-bert_base_uncased_contracts_en.md b/docs/_posts/ahmedlone127/2024-09-06-bert_base_uncased_contracts_en.md new file mode 100644 index 00000000000000..dbed3900fc3cbe --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-bert_base_uncased_contracts_en.md @@ -0,0 +1,98 @@ +--- +layout: model +title: English Legal Contracts BertEmbeddings model (Base, Uncased) +author: John Snow Labs +name: bert_base_uncased_contracts +date: 2024-09-06 +tags: [open_source, bert, embeddings, finance, contracts, en, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained Word Embeddings model, trained on legal contracts, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `bert-base-uncased-contracts` is a English model originally trained by `nlpaueb`. + +## Predicted Entities + + + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_uncased_contracts_en_5.5.0_3.0_1725659779219.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_uncased_contracts_en_5.5.0_3.0_1725659779219.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + +{:.model-param} + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = BertEmbeddings.pretrained("bert_base_uncased_contracts","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline(stages=[documentAssembler, tokenizer, embeddings]) + +data = spark.createDataFrame([["I love Spark NLP."]]).toDF("text") + +result = pipeline.fit(data).transform(data) +``` +```scala +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = BertEmbeddings.pretrained("bert_base_uncased_contracts","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) + +val data = Seq("I love Spark NLP.").toDF("text") + +val result = pipeline.fit(data).transform(data) +``` + +{:.nlu-block} +```python +import nlu +nlu.load("en.embed.bert.contracts.uncased_base").predict("""I love Spark NLP.""") +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_uncased_contracts| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[bert]| +|Language:|en| +|Size:|407.1 MB| \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-bert_finetuned_ner_t3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-bert_finetuned_ner_t3_pipeline_en.md new file mode 100644 index 00000000000000..54f56304439658 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-bert_finetuned_ner_t3_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bert_finetuned_ner_t3_pipeline pipeline DistilBertForTokenClassification from avi10 +author: John Snow Labs +name: bert_finetuned_ner_t3_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_finetuned_ner_t3_pipeline` is a English model originally trained by avi10. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_finetuned_ner_t3_pipeline_en_5.5.0_3.0_1725654062702.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_finetuned_ner_t3_pipeline_en_5.5.0_3.0_1725654062702.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_finetuned_ner_t3_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_finetuned_ner_t3_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_finetuned_ner_t3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/avi10/bert-finetuned-ner-T3 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_model_gamino_en.md b/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_model_gamino_en.md new file mode 100644 index 00000000000000..12426fed2882d7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_model_gamino_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English burmese_awesome_model_gamino DistilBertForSequenceClassification from gamino +author: John Snow Labs +name: burmese_awesome_model_gamino +date: 2024-09-06 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_model_gamino` is a English model originally trained by gamino. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_model_gamino_en_5.5.0_3.0_1725608321223.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_model_gamino_en_5.5.0_3.0_1725608321223.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DistilBertForSequenceClassification.pretrained("burmese_awesome_model_gamino","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DistilBertForSequenceClassification.pretrained("burmese_awesome_model_gamino", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_model_gamino| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|250.0 MB| + +## References + +https://huggingface.co/gamino/my_awesome_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_qa_model_farfalla_en.md b/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_qa_model_farfalla_en.md new file mode 100644 index 00000000000000..f48d8ea70dffce --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_qa_model_farfalla_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_awesome_qa_model_farfalla DistilBertForQuestionAnswering from farfalla +author: John Snow Labs +name: burmese_awesome_qa_model_farfalla +date: 2024-09-06 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_farfalla` is a English model originally trained by farfalla. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_farfalla_en_5.5.0_3.0_1725655067783.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_farfalla_en_5.5.0_3.0_1725655067783.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_farfalla","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_farfalla", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
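+
+A minimal sketch for reading the predicted answer span, assuming the `pipelineDF` and `answer` columns from the Python example above:
+
+```python
+# Minimal sketch: display the extracted answer text for each question/context pair.
+pipelineDF.select("answer.result").show(truncate=False)
+```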
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_farfalla| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/farfalla/my_awesome_qa_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_qa_model_jennydqmm_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_qa_model_jennydqmm_pipeline_en.md new file mode 100644 index 00000000000000..292137e70e1cbc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_qa_model_jennydqmm_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English burmese_awesome_qa_model_jennydqmm_pipeline pipeline DistilBertForQuestionAnswering from JennyDQMM +author: John Snow Labs +name: burmese_awesome_qa_model_jennydqmm_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_jennydqmm_pipeline` is a English model originally trained by JennyDQMM. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_jennydqmm_pipeline_en_5.5.0_3.0_1725652320536.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_jennydqmm_pipeline_en_5.5.0_3.0_1725652320536.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_qa_model_jennydqmm_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_qa_model_jennydqmm_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_jennydqmm_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/JennyDQMM/my_awesome_qa_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_qa_model_walter133_en.md b/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_qa_model_walter133_en.md new file mode 100644 index 00000000000000..bfc5b488a404da --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_qa_model_walter133_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_awesome_qa_model_walter133 DistilBertForQuestionAnswering from walter133 +author: John Snow Labs +name: burmese_awesome_qa_model_walter133 +date: 2024-09-06 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_walter133` is a English model originally trained by walter133. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_walter133_en_5.5.0_3.0_1725621819651.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_walter133_en_5.5.0_3.0_1725621819651.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_walter133","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_walter133", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_walter133| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/walter133/my_awesome_qa_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_wnut_model_studentmsd1_en.md b/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_wnut_model_studentmsd1_en.md new file mode 100644 index 00000000000000..331284d60fbc9e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_wnut_model_studentmsd1_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English burmese_awesome_wnut_model_studentmsd1 DistilBertForTokenClassification from studentmsd1 +author: John Snow Labs +name: burmese_awesome_wnut_model_studentmsd1 +date: 2024-09-06 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_wnut_model_studentmsd1` is a English model originally trained by studentmsd1. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_studentmsd1_en_5.5.0_3.0_1725599660761.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_studentmsd1_en_5.5.0_3.0_1725599660761.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_awesome_wnut_model_studentmsd1","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_awesome_wnut_model_studentmsd1", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_wnut_model_studentmsd1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/studentmsd1/my_awesome_wnut_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_wnut_model_yannik_646_en.md b/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_wnut_model_yannik_646_en.md new file mode 100644 index 00000000000000..5d06aa3fc8b1fd --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-burmese_awesome_wnut_model_yannik_646_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English burmese_awesome_wnut_model_yannik_646 DistilBertForTokenClassification from yannik-646 +author: John Snow Labs +name: burmese_awesome_wnut_model_yannik_646 +date: 2024-09-06 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_wnut_model_yannik_646` is a English model originally trained by yannik-646. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_yannik_646_en_5.5.0_3.0_1725598974654.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_yannik_646_en_5.5.0_3.0_1725598974654.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_awesome_wnut_model_yannik_646","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_awesome_wnut_model_yannik_646", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_wnut_model_yannik_646| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/yannik-646/my_awesome_wnut_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-burmese_qa_model_parisaabbasi_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-burmese_qa_model_parisaabbasi_pipeline_en.md new file mode 100644 index 00000000000000..29ebc7ccb888b2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-burmese_qa_model_parisaabbasi_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English burmese_qa_model_parisaabbasi_pipeline pipeline DistilBertForQuestionAnswering from ParisaAbbasi +author: John Snow Labs +name: burmese_qa_model_parisaabbasi_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_qa_model_parisaabbasi_pipeline` is a English model originally trained by ParisaAbbasi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_qa_model_parisaabbasi_pipeline_en_5.5.0_3.0_1725652669853.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_qa_model_parisaabbasi_pipeline_en_5.5.0_3.0_1725652669853.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_qa_model_parisaabbasi_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_qa_model_parisaabbasi_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_qa_model_parisaabbasi_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/ParisaAbbasi/my_qa_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-classifier__generated_data_only__uncertaintydetection_albert_en.md b/docs/_posts/ahmedlone127/2024-09-06-classifier__generated_data_only__uncertaintydetection_albert_en.md new file mode 100644 index 00000000000000..9e2a0dbf041482 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-classifier__generated_data_only__uncertaintydetection_albert_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English classifier__generated_data_only__uncertaintydetection_albert AlbertForSequenceClassification from yevhenkost +author: John Snow Labs +name: classifier__generated_data_only__uncertaintydetection_albert +date: 2024-09-06 +tags: [en, open_source, onnx, sequence_classification, albert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: AlbertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained AlbertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`classifier__generated_data_only__uncertaintydetection_albert` is a English model originally trained by yevhenkost. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/classifier__generated_data_only__uncertaintydetection_albert_en_5.5.0_3.0_1725662081837.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/classifier__generated_data_only__uncertaintydetection_albert_en_5.5.0_3.0_1725662081837.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = AlbertForSequenceClassification.pretrained("classifier__generated_data_only__uncertaintydetection_albert","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = AlbertForSequenceClassification.pretrained("classifier__generated_data_only__uncertaintydetection_albert", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|classifier__generated_data_only__uncertaintydetection_albert| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|44.2 MB| + +## References + +https://huggingface.co/yevhenkost/classifier__generated_data_only__uncertaintydetection_albert \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-clip_8_model_en.md b/docs/_posts/ahmedlone127/2024-09-06-clip_8_model_en.md new file mode 100644 index 00000000000000..0481a00dcd54ca --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-clip_8_model_en.md @@ -0,0 +1,120 @@ +--- +layout: model +title: English clip_8_model CLIPForZeroShotClassification from shaunster +author: John Snow Labs +name: clip_8_model +date: 2024-09-06 +tags: [en, open_source, onnx, zero_shot, clip, image] +task: Zero-Shot Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CLIPForZeroShotClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CLIPForZeroShotClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`clip_8_model` is a English model originally trained by shaunster. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/clip_8_model_en_5.5.0_3.0_1725649949708.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/clip_8_model_en_5.5.0_3.0_1725649949708.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+imageDF = spark.read \
+    .format("image") \
+    .option("dropInvalid", value = True) \
+    .load("src/test/resources/image/")
+
+candidateLabels = [
+    "a photo of a bird",
+    "a photo of a cat",
+    "a photo of a dog",
+    "a photo of a hen",
+    "a photo of a hippo",
+    "a photo of a room",
+    "a photo of a tractor",
+    "a photo of an ostrich",
+    "a photo of an ox"]
+
+imageAssembler = ImageAssembler() \
+    .setInputCol("image") \
+    .setOutputCol("image_assembler")
+
+imageClassifier = CLIPForZeroShotClassification.pretrained("clip_8_model","en") \
+    .setInputCols(["image_assembler"]) \
+    .setOutputCol("label") \
+    .setCandidateLabels(candidateLabels)
+
+pipeline = Pipeline().setStages([imageAssembler, imageClassifier])
+pipelineModel = pipeline.fit(imageDF)
+pipelineDF = pipelineModel.transform(imageDF)
+
+```
+```scala
+
+val imageDF = ResourceHelper.spark.read
+    .format("image")
+    .option("dropInvalid", value = true)
+    .load("src/test/resources/image/")
+
+val candidateLabels = Array(
+    "a photo of a bird",
+    "a photo of a cat",
+    "a photo of a dog",
+    "a photo of a hen",
+    "a photo of a hippo",
+    "a photo of a room",
+    "a photo of a tractor",
+    "a photo of an ostrich",
+    "a photo of an ox")
+
+val imageAssembler = new ImageAssembler()
+    .setInputCol("image")
+    .setOutputCol("image_assembler")
+
+val imageClassifier = CLIPForZeroShotClassification.pretrained("clip_8_model","en")
+    .setInputCols(Array("image_assembler"))
+    .setOutputCol("label")
+    .setCandidateLabels(candidateLabels)
+
+val pipeline = new Pipeline().setStages(Array(imageAssembler, imageClassifier))
+val pipelineModel = pipeline.fit(imageDF)
+val pipelineDF = pipelineModel.transform(imageDF)
+
+```
+</div>
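+
+A minimal sketch for viewing the predicted label per image, assuming the `pipelineDF` from the Python example above; the `image.origin` field comes from Spark's image data source and identifies the file each prediction belongs to.
+
+```python
+# Minimal sketch: show the source image path alongside the predicted candidate label.
+pipelineDF.select("image.origin", "label.result").show(truncate=False)
+```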
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|clip_8_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[image_assembler]| +|Output Labels:|[label]| +|Language:|en| +|Size:|567.5 MB| + +## References + +https://huggingface.co/shaunster/clip_8_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-clip_base_10240_checkpoint350_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-clip_base_10240_checkpoint350_pipeline_en.md new file mode 100644 index 00000000000000..f3319069b49477 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-clip_base_10240_checkpoint350_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English clip_base_10240_checkpoint350_pipeline pipeline CLIPForZeroShotClassification from gowitheflowlab +author: John Snow Labs +name: clip_base_10240_checkpoint350_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Zero-Shot Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CLIPForZeroShotClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`clip_base_10240_checkpoint350_pipeline` is a English model originally trained by gowitheflowlab. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/clip_base_10240_checkpoint350_pipeline_en_5.5.0_3.0_1725650017253.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/clip_base_10240_checkpoint350_pipeline_en_5.5.0_3.0_1725650017253.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("clip_base_10240_checkpoint350_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("clip_base_10240_checkpoint350_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|clip_base_10240_checkpoint350_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|509.8 MB| + +## References + +https://huggingface.co/gowitheflowlab/clip-base-10240-checkpoint350 + +## Included Models + +- ImageAssembler +- CLIPForZeroShotClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-clip_gagan3012_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-clip_gagan3012_pipeline_en.md new file mode 100644 index 00000000000000..14410cc7a7e2c5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-clip_gagan3012_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English clip_gagan3012_pipeline pipeline CLIPForZeroShotClassification from gagan3012 +author: John Snow Labs +name: clip_gagan3012_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Zero-Shot Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CLIPForZeroShotClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`clip_gagan3012_pipeline` is a English model originally trained by gagan3012. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/clip_gagan3012_pipeline_en_5.5.0_3.0_1725650132392.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/clip_gagan3012_pipeline_en_5.5.0_3.0_1725650132392.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("clip_gagan3012_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("clip_gagan3012_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|clip_gagan3012_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|397.5 MB| + +## References + +https://huggingface.co/gagan3012/clip + +## Included Models + +- ImageAssembler +- CLIPForZeroShotClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-cuad_distil_parties_dates_law_08_18_indonesian_question1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-cuad_distil_parties_dates_law_08_18_indonesian_question1_pipeline_en.md new file mode 100644 index 00000000000000..73328949a2fa4a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-cuad_distil_parties_dates_law_08_18_indonesian_question1_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English cuad_distil_parties_dates_law_08_18_indonesian_question1_pipeline pipeline DistilBertForQuestionAnswering from saraks +author: John Snow Labs +name: cuad_distil_parties_dates_law_08_18_indonesian_question1_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cuad_distil_parties_dates_law_08_18_indonesian_question1_pipeline` is a English model originally trained by saraks. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cuad_distil_parties_dates_law_08_18_indonesian_question1_pipeline_en_5.5.0_3.0_1725654758952.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cuad_distil_parties_dates_law_08_18_indonesian_question1_pipeline_en_5.5.0_3.0_1725654758952.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("cuad_distil_parties_dates_law_08_18_indonesian_question1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("cuad_distil_parties_dates_law_08_18_indonesian_question1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cuad_distil_parties_dates_law_08_18_indonesian_question1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/saraks/cuad-distil-parties-dates-law-08-18-id-question1 + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-danish_distilbert_en.md b/docs/_posts/ahmedlone127/2024-09-06-danish_distilbert_en.md new file mode 100644 index 00000000000000..45ae0b9d9bcc5c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-danish_distilbert_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English danish_distilbert DistilBertEmbeddings from gc394 +author: John Snow Labs +name: danish_distilbert +date: 2024-09-06 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`danish_distilbert` is a English model originally trained by gc394. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/danish_distilbert_en_5.5.0_3.0_1725639386091.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/danish_distilbert_en_5.5.0_3.0_1725639386091.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("danish_distilbert","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("danish_distilbert","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
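+
+A minimal sketch for extracting the raw token vectors, assuming the `pipelineDF` and `embeddings` column from the Python example above:
+
+```python
+# Minimal sketch: explode the annotation column to get one embedding vector per token.
+from pyspark.sql import functions as F
+
+pipelineDF.select(F.explode("embeddings.embeddings").alias("vector")).show()
+```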
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|danish_distilbert| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/gc394/da_distilbert \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-delivery_balaned_distilbert_base_uncased_v3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-delivery_balaned_distilbert_base_uncased_v3_pipeline_en.md new file mode 100644 index 00000000000000..be32d91ca99fc1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-delivery_balaned_distilbert_base_uncased_v3_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English delivery_balaned_distilbert_base_uncased_v3_pipeline pipeline DistilBertForSequenceClassification from chuuhtetnaing +author: John Snow Labs +name: delivery_balaned_distilbert_base_uncased_v3_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`delivery_balaned_distilbert_base_uncased_v3_pipeline` is a English model originally trained by chuuhtetnaing. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/delivery_balaned_distilbert_base_uncased_v3_pipeline_en_5.5.0_3.0_1725608157967.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/delivery_balaned_distilbert_base_uncased_v3_pipeline_en_5.5.0_3.0_1725608157967.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("delivery_balaned_distilbert_base_uncased_v3_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("delivery_balaned_distilbert_base_uncased_v3_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|delivery_balaned_distilbert_base_uncased_v3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/chuuhtetnaing/delivery-balaned-distilbert-base-uncased-v3 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-distibert_ner_en.md b/docs/_posts/ahmedlone127/2024-09-06-distibert_ner_en.md new file mode 100644 index 00000000000000..45c8bee378fa18 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-distibert_ner_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distibert_ner DistilBertForTokenClassification from satyamrajawat1994 +author: John Snow Labs +name: distibert_ner +date: 2024-09-06 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distibert_ner` is a English model originally trained by satyamrajawat1994. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distibert_ner_en_5.5.0_3.0_1725599503152.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distibert_ner_en_5.5.0_3.0_1725599503152.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("distibert_ner","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("distibert_ner", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
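 + +The fitted pipeline above writes its predictions to the `ner` annotation column. A minimal sketch for flattening tokens and their predicted tags, continuing the Python example (column names follow the stages defined above): + +```python +from pyspark.sql.functions import col + +# "token.result" and "ner.result" are aligned arrays: one token and one predicted tag per position +pipelineDF.select( + col("token.result").alias("tokens"), + col("ner.result").alias("labels") +).show(truncate=False) +``` 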
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distibert_ner| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/satyamrajawat1994/distibert-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_german_cased_de.md b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_german_cased_de.md new file mode 100644 index 00000000000000..15f3fa0d829440 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_german_cased_de.md @@ -0,0 +1,92 @@ +--- +layout: model +title: German distilbert_base_german_cased DistilBertEmbeddings from huggingface +author: John Snow Labs +name: distilbert_base_german_cased +date: 2024-09-06 +tags: [distilbert, de, open_source, fill_mask, onnx] +task: Embeddings +language: de +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_german_cased` is a German model originally trained by huggingface. + +## Predicted Entities + + + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_german_cased_de_5.5.0_3.0_1725639529730.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_german_cased_de_5.5.0_3.0_1725639529730.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
 +{% include programmingLanguageSelectScalaPythonNLU.html %} +```python +document_assembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols(["document"]) \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_german_cased","de") \ + .setInputCols(["document","token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([document_assembler, tokenizer, embeddings]) + +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") + +pipelineModel = pipeline.fit(data) + +pipelineDF = pipelineModel.transform(data) +``` +```scala +val document_assembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings + .pretrained("distilbert_base_german_cased", "de") + .setInputCols(Array("document","token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(document_assembler, tokenizer, embeddings)) + +val data = Seq("I love spark-nlp").toDF("text") + +val pipelineModel = pipeline.fit(data) + +val pipelineDF = pipelineModel.transform(data) +``` +
+
+{:.model-param}
+## Model Information
+
+{:.table-model}
+|---|---|
+|Model Name:|distilbert_base_german_cased|
+|Compatibility:|Spark NLP 5.5.0+|
+|License:|Open Source|
+|Edition:|Official|
+|Input Labels:|[document, token]|
+|Output Labels:|[distilbert]|
+|Language:|de|
+|Size:|250.3 MB|
+
+## References
+
+https://huggingface.co/distilbert-base-german-cased
\ No newline at end of file
diff --git a/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_imdb_galeng_en.md b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_imdb_galeng_en.md
new file mode 100644
index 00000000000000..dbf8b042068a52
--- /dev/null
+++ b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_imdb_galeng_en.md
@@ -0,0 +1,94 @@
+---
+layout: model
+title: English distilbert_base_uncased_finetuned_imdb_galeng DistilBertEmbeddings from galeng
+author: John Snow Labs
+name: distilbert_base_uncased_finetuned_imdb_galeng
+date: 2024-09-06
+tags: [en, open_source, onnx, embeddings, distilbert]
+task: Embeddings
+language: en
+edition: Spark NLP 5.5.0
+spark_version: 3.0
+supported: true
+engine: onnx
+annotator: DistilBertEmbeddings
+article_header:
+  type: cover
+use_language_switcher: "Python-Scala-Java"
+---
+
+## Description
+
+Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `distilbert_base_uncased_finetuned_imdb_galeng` is an English model originally trained by galeng.
+
+{:.btn-box}
+
+
+[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_galeng_en_5.5.0_3.0_1725664725681.zip){:.button.button-orange.button-orange-trans.arr.button-icon}
+[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_galeng_en_5.5.0_3.0_1725664725681.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}
+
+## How to use
+
+
+
+
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_galeng","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_galeng","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_galeng| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/galeng/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_imdb_kiwihead15_en.md b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_imdb_kiwihead15_en.md new file mode 100644 index 00000000000000..ab1d7c35719956 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_imdb_kiwihead15_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_kiwihead15 DistilBertEmbeddings from Kiwihead15 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_kiwihead15 +date: 2024-09-06 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_kiwihead15` is a English model originally trained by Kiwihead15. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_kiwihead15_en_5.5.0_3.0_1725665194886.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_kiwihead15_en_5.5.0_3.0_1725665194886.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_kiwihead15","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_kiwihead15","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_kiwihead15| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Kiwihead15/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_imdb_mireya25_en.md b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_imdb_mireya25_en.md new file mode 100644 index 00000000000000..e74429ae5a96f9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_imdb_mireya25_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_mireya25 DistilBertEmbeddings from Mireya25 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_mireya25 +date: 2024-09-06 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_mireya25` is a English model originally trained by Mireya25. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_mireya25_en_5.5.0_3.0_1725665197518.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_mireya25_en_5.5.0_3.0_1725665197518.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_mireya25","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_mireya25","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_mireya25| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Mireya25/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_imdb_nourhanabosaeed_en.md b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_imdb_nourhanabosaeed_en.md new file mode 100644 index 00000000000000..49a369c0a436a0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_imdb_nourhanabosaeed_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_nourhanabosaeed DistilBertEmbeddings from NourhanAbosaeed +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_nourhanabosaeed +date: 2024-09-06 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_nourhanabosaeed` is a English model originally trained by NourhanAbosaeed. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_nourhanabosaeed_en_5.5.0_3.0_1725664771331.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_nourhanabosaeed_en_5.5.0_3.0_1725664771331.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_nourhanabosaeed","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_nourhanabosaeed","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_nourhanabosaeed| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/NourhanAbosaeed/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_imdb_rikrim_en.md b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_imdb_rikrim_en.md new file mode 100644 index 00000000000000..74e5b152f2ffba --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_imdb_rikrim_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_rikrim DistilBertEmbeddings from RiKrim +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_rikrim +date: 2024-09-06 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_rikrim` is a English model originally trained by RiKrim. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_rikrim_en_5.5.0_3.0_1725639286279.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_rikrim_en_5.5.0_3.0_1725639286279.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_rikrim","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_rikrim","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_rikrim| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/RiKrim/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_squad_ajayrath_en.md b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_squad_ajayrath_en.md new file mode 100644 index 00000000000000..85fdb1e57578f9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_squad_ajayrath_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_ajayrath DistilBertForQuestionAnswering from ajayrath +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_ajayrath +date: 2024-09-06 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_ajayrath` is a English model originally trained by ajayrath. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_ajayrath_en_5.5.0_3.0_1725654472034.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_ajayrath_en_5.5.0_3.0_1725654472034.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
 +{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = MultiDocumentAssembler() \ + .setInputCols(["question", "context"]) \ + .setOutputCols(["document_question", "document_context"]) + +spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_ajayrath","en") \ + .setInputCols(["document_question","document_context"]) \ + .setOutputCol("answer") + +pipeline = Pipeline().setStages([documentAssembler, spanClassifier]) +data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new MultiDocumentAssembler() + .setInputCols(Array("question", "context")) + .setOutputCols(Array("document_question", "document_context")) + +val spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_ajayrath", "en") + .setInputCols(Array("document_question","document_context")) + .setOutputCol("answer") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier)) +val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
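+
+After the transform, the extracted answer span is available in the `answer` annotation column. A minimal sketch for reading it, continuing the Python example (column names follow the stages defined above):
+
+```python
+# "answer.result" holds the predicted answer text for each question/context pair
+pipelineDF.select("question", "context", "answer.result").show(truncate=False)
+```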
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_ajayrath| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/ajayrath/distilbert-base-uncased-finetuned-squad \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_squad_ajayrath_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_squad_ajayrath_pipeline_en.md new file mode 100644 index 00000000000000..1072e4f4e6e1b7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_squad_ajayrath_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_ajayrath_pipeline pipeline DistilBertForQuestionAnswering from ajayrath +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_ajayrath_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_ajayrath_pipeline` is a English model originally trained by ajayrath. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_ajayrath_pipeline_en_5.5.0_3.0_1725654485124.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_ajayrath_pipeline_en_5.5.0_3.0_1725654485124.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_squad_ajayrath_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_squad_ajayrath_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_ajayrath_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/ajayrath/distilbert-base-uncased-finetuned-squad + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_squad_d5716d28_ahmed97_en.md b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_squad_d5716d28_ahmed97_en.md new file mode 100644 index 00000000000000..d3919acb4f13f8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_squad_d5716d28_ahmed97_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_d5716d28_ahmed97 DistilBertForQuestionAnswering from Ahmed97 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_d5716d28_ahmed97 +date: 2024-09-06 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_d5716d28_ahmed97` is a English model originally trained by Ahmed97. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_d5716d28_ahmed97_en_5.5.0_3.0_1725652670662.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_d5716d28_ahmed97_en_5.5.0_3.0_1725652670662.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_d5716d28_ahmed97","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_d5716d28_ahmed97", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_d5716d28_ahmed97| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Ahmed97/distilbert-base-uncased-finetuned-squad-d5716d28 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_squad_d5716d28_miesnerjacob_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_squad_d5716d28_miesnerjacob_pipeline_en.md new file mode 100644 index 00000000000000..5e6166a1dfa03e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-distilbert_base_uncased_finetuned_squad_d5716d28_miesnerjacob_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_d5716d28_miesnerjacob_pipeline pipeline DistilBertForQuestionAnswering from miesnerjacob +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_d5716d28_miesnerjacob_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_d5716d28_miesnerjacob_pipeline` is a English model originally trained by miesnerjacob. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_d5716d28_miesnerjacob_pipeline_en_5.5.0_3.0_1725652375322.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_d5716d28_miesnerjacob_pipeline_en_5.5.0_3.0_1725652375322.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_squad_d5716d28_miesnerjacob_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_squad_d5716d28_miesnerjacob_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_d5716d28_miesnerjacob_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/miesnerjacob/distilbert-base-uncased-finetuned-squad-d5716d28 + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-distilbert_finetuned_ai4privacy_v2_en.md b/docs/_posts/ahmedlone127/2024-09-06-distilbert_finetuned_ai4privacy_v2_en.md new file mode 100644 index 00000000000000..4e56083278ae99 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-distilbert_finetuned_ai4privacy_v2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_finetuned_ai4privacy_v2 DistilBertForTokenClassification from Isotonic +author: John Snow Labs +name: distilbert_finetuned_ai4privacy_v2 +date: 2024-09-06 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_finetuned_ai4privacy_v2` is a English model originally trained by Isotonic. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_finetuned_ai4privacy_v2_en_5.5.0_3.0_1725599257071.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_finetuned_ai4privacy_v2_en_5.5.0_3.0_1725599257071.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
 +{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_finetuned_ai4privacy_v2","en") \ + .setInputCols(["document","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_finetuned_ai4privacy_v2", "en") + .setInputCols(Array("document","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_finetuned_ai4privacy_v2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.6 MB| + +## References + +https://huggingface.co/Isotonic/distilbert_finetuned_ai4privacy_v2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-distilbert_qa_english_german_vietnamese_chinese_spanish_model_pipeline_xx.md b/docs/_posts/ahmedlone127/2024-09-06-distilbert_qa_english_german_vietnamese_chinese_spanish_model_pipeline_xx.md new file mode 100644 index 00000000000000..f102577d5aa70c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-distilbert_qa_english_german_vietnamese_chinese_spanish_model_pipeline_xx.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Multilingual distilbert_qa_english_german_vietnamese_chinese_spanish_model_pipeline pipeline DistilBertForQuestionAnswering from ZYW +author: John Snow Labs +name: distilbert_qa_english_german_vietnamese_chinese_spanish_model_pipeline +date: 2024-09-06 +tags: [xx, open_source, pipeline, onnx] +task: Question Answering +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_qa_english_german_vietnamese_chinese_spanish_model_pipeline` is a Multilingual model originally trained by ZYW. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_qa_english_german_vietnamese_chinese_spanish_model_pipeline_xx_5.5.0_3.0_1725652876888.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_qa_english_german_vietnamese_chinese_spanish_model_pipeline_xx_5.5.0_3.0_1725652876888.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_qa_english_german_vietnamese_chinese_spanish_model_pipeline", lang = "xx") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_qa_english_german_vietnamese_chinese_spanish_model_pipeline", lang = "xx") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_qa_english_german_vietnamese_chinese_spanish_model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|xx| +|Size:|505.4 MB| + +## References + +https://huggingface.co/ZYW/en-de-vi-zh-es-model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-distilbert_qa_squad_english_german_spanish_vietnamese_chinese_model_xx.md b/docs/_posts/ahmedlone127/2024-09-06-distilbert_qa_squad_english_german_spanish_vietnamese_chinese_model_xx.md new file mode 100644 index 00000000000000..c30dafd6f616a6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-distilbert_qa_squad_english_german_spanish_vietnamese_chinese_model_xx.md @@ -0,0 +1,86 @@ +--- +layout: model +title: Multilingual distilbert_qa_squad_english_german_spanish_vietnamese_chinese_model DistilBertForQuestionAnswering from ZYW +author: John Snow Labs +name: distilbert_qa_squad_english_german_spanish_vietnamese_chinese_model +date: 2024-09-06 +tags: [xx, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_qa_squad_english_german_spanish_vietnamese_chinese_model` is a Multilingual model originally trained by ZYW. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_qa_squad_english_german_spanish_vietnamese_chinese_model_xx_5.5.0_3.0_1725652445376.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_qa_squad_english_german_spanish_vietnamese_chinese_model_xx_5.5.0_3.0_1725652445376.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_qa_squad_english_german_spanish_vietnamese_chinese_model","xx") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_qa_squad_english_german_spanish_vietnamese_chinese_model", "xx")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_qa_squad_english_german_spanish_vietnamese_chinese_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|xx| +|Size:|505.4 MB| + +## References + +https://huggingface.co/ZYW/squad-en-de-es-vi-zh-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-dummy_model_joyraj_en.md b/docs/_posts/ahmedlone127/2024-09-06-dummy_model_joyraj_en.md new file mode 100644 index 00000000000000..edb1f7282e1d5b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-dummy_model_joyraj_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_joyraj CamemBertEmbeddings from Joyraj +author: John Snow Labs +name: dummy_model_joyraj +date: 2024-09-06 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_joyraj` is a English model originally trained by Joyraj. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_joyraj_en_5.5.0_3.0_1725632500187.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_joyraj_en_5.5.0_3.0_1725632500187.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_joyraj","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_joyraj","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
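 + +For quick experiments on a handful of sentences, the fitted pipeline can also be wrapped in Spark NLP's `LightPipeline`, which skips the DataFrame round trip. A minimal sketch, continuing the Python example above: + +```python +from sparknlp.base import LightPipeline + +# Annotate raw strings directly; each annotation under "embeddings" carries the token text and its vector +light = LightPipeline(pipelineModel) +result = light.fullAnnotate("I love spark-nlp") +print(result[0]["embeddings"]) +``` 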
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_joyraj| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/Joyraj/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-english_german_translation_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-english_german_translation_pipeline_en.md new file mode 100644 index 00000000000000..211027b17d6945 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-english_german_translation_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English english_german_translation_pipeline pipeline MarianTransformer from alina1997 +author: John Snow Labs +name: english_german_translation_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`english_german_translation_pipeline` is a English model originally trained by alina1997. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/english_german_translation_pipeline_en_5.5.0_3.0_1725636484615.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/english_german_translation_pipeline_en_5.5.0_3.0_1725636484615.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("english_german_translation_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("english_german_translation_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|english_german_translation_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|500.0 MB| + +## References + +https://huggingface.co/alina1997/en_de_translation + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-extractive_question_answering_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-extractive_question_answering_pipeline_en.md new file mode 100644 index 00000000000000..72d3373f267a7b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-extractive_question_answering_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English extractive_question_answering_pipeline pipeline DistilBertForQuestionAnswering from autoevaluate +author: John Snow Labs +name: extractive_question_answering_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`extractive_question_answering_pipeline` is a English model originally trained by autoevaluate. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/extractive_question_answering_pipeline_en_5.5.0_3.0_1725622126545.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/extractive_question_answering_pipeline_en_5.5.0_3.0_1725622126545.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("extractive_question_answering_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("extractive_question_answering_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|extractive_question_answering_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/autoevaluate/extractive-question-answering + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-filtered_lr_2e_5_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-filtered_lr_2e_5_pipeline_en.md new file mode 100644 index 00000000000000..95f484cad45f47 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-filtered_lr_2e_5_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English filtered_lr_2e_5_pipeline pipeline DistilBertForTokenClassification from Gkumi +author: John Snow Labs +name: filtered_lr_2e_5_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`filtered_lr_2e_5_pipeline` is a English model originally trained by Gkumi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/filtered_lr_2e_5_pipeline_en_5.5.0_3.0_1725653303863.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/filtered_lr_2e_5_pipeline_en_5.5.0_3.0_1725653303863.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("filtered_lr_2e_5_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("filtered_lr_2e_5_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|filtered_lr_2e_5_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|243.8 MB| + +## References + +https://huggingface.co/Gkumi/filtered-lr-2e-5 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-finance_bearish_bullish_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-finance_bearish_bullish_pipeline_en.md new file mode 100644 index 00000000000000..fac78a90a65edb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-finance_bearish_bullish_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English finance_bearish_bullish_pipeline pipeline DistilBertForSequenceClassification from ldh243 +author: John Snow Labs +name: finance_bearish_bullish_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finance_bearish_bullish_pipeline` is a English model originally trained by ldh243. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finance_bearish_bullish_pipeline_en_5.5.0_3.0_1725607964230.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finance_bearish_bullish_pipeline_en_5.5.0_3.0_1725607964230.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("finance_bearish_bullish_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("finance_bearish_bullish_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finance_bearish_bullish_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/ldh243/finance-bearish-bullish + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-finetuned_qa_model_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-finetuned_qa_model_pipeline_en.md new file mode 100644 index 00000000000000..5b02a80624a426 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-finetuned_qa_model_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English finetuned_qa_model_pipeline pipeline DistilBertForQuestionAnswering from yileitu +author: John Snow Labs +name: finetuned_qa_model_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuned_qa_model_pipeline` is a English model originally trained by yileitu. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuned_qa_model_pipeline_en_5.5.0_3.0_1725621501707.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuned_qa_model_pipeline_en_5.5.0_3.0_1725621501707.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("finetuned_qa_model_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("finetuned_qa_model_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuned_qa_model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/yileitu/finetuned_qa_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-indo_aryan_xlm_r_base_gu.md b/docs/_posts/ahmedlone127/2024-09-06-indo_aryan_xlm_r_base_gu.md new file mode 100644 index 00000000000000..a1e7f3b7721a5c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-indo_aryan_xlm_r_base_gu.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Gujarati indo_aryan_xlm_r_base XlmRoBertaEmbeddings from ashwani-tanwar +author: John Snow Labs +name: indo_aryan_xlm_r_base +date: 2024-09-06 +tags: [gu, open_source, onnx, embeddings, xlm_roberta] +task: Embeddings +language: gu +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`indo_aryan_xlm_r_base` is a Gujarati model originally trained by ashwani-tanwar. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/indo_aryan_xlm_r_base_gu_5.5.0_3.0_1725626772214.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/indo_aryan_xlm_r_base_gu_5.5.0_3.0_1725626772214.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = XlmRoBertaEmbeddings.pretrained("indo_aryan_xlm_r_base","gu") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = XlmRoBertaEmbeddings.pretrained("indo_aryan_xlm_r_base","gu") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
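+
+As a follow-up to the Python snippet above, the token-level vectors written to the `embeddings` column can be flattened into one row per token; `result` and `embeddings` are the standard annotation fields in Spark NLP.
+
+```python
+# Flatten the token-level XLM-RoBERTa vectors from the "embeddings" column
+# produced by the pipeline above (assumes `pipelineDF` from the previous snippet).
+from pyspark.sql import functions as F
+
+token_vectors = pipelineDF.select(F.explode("embeddings").alias("emb")) \
+    .select(
+        F.col("emb.result").alias("token"),
+        F.col("emb.embeddings").alias("vector")
+    )
+
+token_vectors.show(truncate=80)
+```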
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|indo_aryan_xlm_r_base| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[xlm_roberta]| +|Language:|gu| +|Size:|651.9 MB| + +## References + +https://huggingface.co/ashwani-tanwar/Indo-Aryan-XLM-R-Base \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-italian_legal_bert_pipeline_it.md b/docs/_posts/ahmedlone127/2024-09-06-italian_legal_bert_pipeline_it.md new file mode 100644 index 00000000000000..849b1aaf5608ef --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-italian_legal_bert_pipeline_it.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Italian italian_legal_bert_pipeline pipeline BertEmbeddings from dlicari +author: John Snow Labs +name: italian_legal_bert_pipeline +date: 2024-09-06 +tags: [it, open_source, pipeline, onnx] +task: Embeddings +language: it +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`italian_legal_bert_pipeline` is a Italian model originally trained by dlicari. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/italian_legal_bert_pipeline_it_5.5.0_3.0_1725614952150.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/italian_legal_bert_pipeline_it_5.5.0_3.0_1725614952150.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("italian_legal_bert_pipeline", lang = "it") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("italian_legal_bert_pipeline", lang = "it") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|italian_legal_bert_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|it| +|Size:|408.9 MB| + +## References + +https://huggingface.co/dlicari/Italian-Legal-BERT + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-marathi_marh_val_g_pipeline_mr.md b/docs/_posts/ahmedlone127/2024-09-06-marathi_marh_val_g_pipeline_mr.md new file mode 100644 index 00000000000000..fb21dad9f487a2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-marathi_marh_val_g_pipeline_mr.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Marathi marathi_marh_val_g_pipeline pipeline WhisperForCTC from simran14 +author: John Snow Labs +name: marathi_marh_val_g_pipeline +date: 2024-09-06 +tags: [mr, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: mr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marathi_marh_val_g_pipeline` is a Marathi model originally trained by simran14. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marathi_marh_val_g_pipeline_mr_5.5.0_3.0_1725647534801.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marathi_marh_val_g_pipeline_mr_5.5.0_3.0_1725647534801.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("marathi_marh_val_g_pipeline", lang = "mr") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("marathi_marh_val_g_pipeline", lang = "mr") +val annotations = pipeline.transform(df) + +``` +
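+
+The `df` in the snippet above must contain audio rather than text, since the pipeline begins with an AudioAssembler. The sketch below is illustrative only: the `audio_content` column name, the 16 kHz sample rate, the use of librosa for decoding, and the location of the transcription output are all assumptions, not details taken from this card.
+
+```python
+# Illustrative sketch of feeding raw audio samples to the ASR pipeline.
+# All column names and the librosa-based loading are assumptions.
+import librosa
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()
+
+samples, _ = librosa.load("marathi_clip.wav", sr=16000)   # hypothetical audio file
+df = spark.createDataFrame([[samples.tolist()]]).toDF("audio_content")
+
+pipeline = PretrainedPipeline("marathi_marh_val_g_pipeline", lang="mr")
+result = pipeline.transform(df)
+
+result.printSchema()   # locate the transcription column before selecting it
+```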
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marathi_marh_val_g_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|mr| +|Size:|1.7 GB| + +## References + +https://huggingface.co/simran14/mr-val-g + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_danivicen_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_danivicen_pipeline_en.md new file mode 100644 index 00000000000000..23b2ad3e043dba --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_danivicen_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_danivicen_pipeline pipeline MarianTransformer from danivicen +author: John Snow Labs +name: marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_danivicen_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_danivicen_pipeline` is a English model originally trained by danivicen. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_danivicen_pipeline_en_5.5.0_3.0_1725636156423.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_danivicen_pipeline_en_5.5.0_3.0_1725636156423.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_danivicen_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_danivicen_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_danivicen_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|508.7 MB| + +## References + +https://huggingface.co/danivicen/marian-finetuned-kde4-en-to-fr-accelerate + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-marian_finetuned_kde4_english_tonga_tonga_islands_french_indah1_en.md b/docs/_posts/ahmedlone127/2024-09-06-marian_finetuned_kde4_english_tonga_tonga_islands_french_indah1_en.md new file mode 100644 index 00000000000000..61ccedbe542862 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-marian_finetuned_kde4_english_tonga_tonga_islands_french_indah1_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English marian_finetuned_kde4_english_tonga_tonga_islands_french_indah1 MarianTransformer from Indah1 +author: John Snow Labs +name: marian_finetuned_kde4_english_tonga_tonga_islands_french_indah1 +date: 2024-09-06 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_kde4_english_tonga_tonga_islands_french_indah1` is a English model originally trained by Indah1. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_indah1_en_5.5.0_3.0_1725634958522.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_indah1_en_5.5.0_3.0_1725634958522.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("marian_finetuned_kde4_english_tonga_tonga_islands_french_indah1","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("marian_finetuned_kde4_english_tonga_tonga_islands_french_indah1","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
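+
+A short follow-up to the Python snippet above: once the pipeline has run, the translations produced by the MarianTransformer stage can be collected from the `translation` column.
+
+```python
+# Collect the translated sentences from the "translation" column
+# (assumes `pipelineDF` from the Python snippet above).
+for row in pipelineDF.select("translation.result").collect():
+    for sentence in row["result"]:
+        print(sentence)
+```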
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_kde4_english_tonga_tonga_islands_french_indah1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.2 MB| + +## References + +https://huggingface.co/Indah1/marian-finetuned-kde4-en-to-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-marianmix_english_chinese_10_deskdown_en.md b/docs/_posts/ahmedlone127/2024-09-06-marianmix_english_chinese_10_deskdown_en.md new file mode 100644 index 00000000000000..9023bfe5464645 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-marianmix_english_chinese_10_deskdown_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English marianmix_english_chinese_10_deskdown MarianTransformer from DeskDown +author: John Snow Labs +name: marianmix_english_chinese_10_deskdown +date: 2024-09-06 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marianmix_english_chinese_10_deskdown` is a English model originally trained by DeskDown. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marianmix_english_chinese_10_deskdown_en_5.5.0_3.0_1725635010988.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marianmix_english_chinese_10_deskdown_en_5.5.0_3.0_1725635010988.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("marianmix_english_chinese_10_deskdown","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("marianmix_english_chinese_10_deskdown","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marianmix_english_chinese_10_deskdown| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|751.1 MB| + +## References + +https://huggingface.co/DeskDown/MarianMix_en-zh-10 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-marianmt_finetuned_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-marianmt_finetuned_pipeline_en.md new file mode 100644 index 00000000000000..f83afa5471a79a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-marianmt_finetuned_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English marianmt_finetuned_pipeline pipeline MarianTransformer from SFZheng7 +author: John Snow Labs +name: marianmt_finetuned_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marianmt_finetuned_pipeline` is a English model originally trained by SFZheng7. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marianmt_finetuned_pipeline_en_5.5.0_3.0_1725636342867.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marianmt_finetuned_pipeline_en_5.5.0_3.0_1725636342867.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("marianmt_finetuned_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("marianmt_finetuned_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marianmt_finetuned_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|541.5 MB| + +## References + +https://huggingface.co/SFZheng7/MarianMT-Finetuned + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-mdeberta_v3_base_mqnli_en.md b/docs/_posts/ahmedlone127/2024-09-06-mdeberta_v3_base_mqnli_en.md new file mode 100644 index 00000000000000..05ee1fcc3b0510 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-mdeberta_v3_base_mqnli_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English mdeberta_v3_base_mqnli DeBertaForSequenceClassification from SachinPatel248 +author: John Snow Labs +name: mdeberta_v3_base_mqnli +date: 2024-09-06 +tags: [en, open_source, onnx, sequence_classification, deberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DeBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mdeberta_v3_base_mqnli` is a English model originally trained by SachinPatel248. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mdeberta_v3_base_mqnli_en_5.5.0_3.0_1725590322502.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mdeberta_v3_base_mqnli_en_5.5.0_3.0_1725590322502.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DeBertaForSequenceClassification.pretrained("mdeberta_v3_base_mqnli","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DeBertaForSequenceClassification.pretrained("mdeberta_v3_base_mqnli", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
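+
+As a follow-up to the snippet above, the predicted label and per-class scores can be read from the `class` column produced by DeBertaForSequenceClassification; the scores live in the annotation metadata.
+
+```python
+# Read the predicted label and class scores from the "class" column
+# (assumes `pipelineDF` from the Python snippet above).
+pipelineDF.selectExpr("explode(`class`) as prediction") \
+    .selectExpr(
+        "prediction.result as label",
+        "prediction.metadata as scores"
+    ).show(truncate=False)
+```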
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mdeberta_v3_base_mqnli| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|955.4 MB| + +## References + +https://huggingface.co/SachinPatel248/mdeberta-v3-base-mqnli \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-mo_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-mo_pipeline_en.md new file mode 100644 index 00000000000000..4f6be5ed2af7b4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-mo_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English mo_pipeline pipeline DistilBertEmbeddings from Dinithi +author: John Snow Labs +name: mo_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mo_pipeline` is a English model originally trained by Dinithi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mo_pipeline_en_5.5.0_3.0_1725665057742.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mo_pipeline_en_5.5.0_3.0_1725665057742.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("mo_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("mo_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mo_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/Dinithi/Mo + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-model_zip_en.md b/docs/_posts/ahmedlone127/2024-09-06-model_zip_en.md new file mode 100644 index 00000000000000..987422dddc2a46 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-model_zip_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English model_zip DistilBertEmbeddings from mal-sh +author: John Snow Labs +name: model_zip +date: 2024-09-06 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`model_zip` is a English model originally trained by mal-sh. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/model_zip_en_5.5.0_3.0_1725664918810.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/model_zip_en_5.5.0_3.0_1725664918810.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("model_zip","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("model_zip","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
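+
+A hedged follow-up to the snippet above: Spark NLP's EmbeddingsFinisher can expose the token vectors in the `embeddings` column as plain Spark vectors, which is often more convenient for downstream ML stages.
+
+```python
+# Convert the DistilBERT token vectors into plain Spark vectors with
+# EmbeddingsFinisher (assumes `pipelineDF` from the Python snippet above).
+from sparknlp.base import EmbeddingsFinisher
+
+finisher = EmbeddingsFinisher() \
+    .setInputCols(["embeddings"]) \
+    .setOutputCols(["finished_embeddings"]) \
+    .setOutputAsVector(True)
+
+finisher.transform(pipelineDF) \
+    .selectExpr("explode(finished_embeddings) as vector") \
+    .show(truncate=60)
+```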
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|model_zip| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/mal-sh/model.zip \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-mpnet_80k_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-mpnet_80k_pipeline_en.md new file mode 100644 index 00000000000000..1e1e25d38e37fe --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-mpnet_80k_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English mpnet_80k_pipeline pipeline MPNetEmbeddings from heka-ai +author: John Snow Labs +name: mpnet_80k_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mpnet_80k_pipeline` is a English model originally trained by heka-ai. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mpnet_80k_pipeline_en_5.5.0_3.0_1725595586544.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mpnet_80k_pipeline_en_5.5.0_3.0_1725595586544.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("mpnet_80k_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("mpnet_80k_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mpnet_80k_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|407.0 MB| + +## References + +https://huggingface.co/heka-ai/mpnet-80k + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-nepal_bhasa_dummy_model_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-nepal_bhasa_dummy_model_pipeline_en.md new file mode 100644 index 00000000000000..871bb7643a8919 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-nepal_bhasa_dummy_model_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English nepal_bhasa_dummy_model_pipeline pipeline CamemBertEmbeddings from gulabpatel +author: John Snow Labs +name: nepal_bhasa_dummy_model_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`nepal_bhasa_dummy_model_pipeline` is a English model originally trained by gulabpatel. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/nepal_bhasa_dummy_model_pipeline_en_5.5.0_3.0_1725633052786.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/nepal_bhasa_dummy_model_pipeline_en_5.5.0_3.0_1725633052786.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("nepal_bhasa_dummy_model_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("nepal_bhasa_dummy_model_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|nepal_bhasa_dummy_model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/gulabpatel/new-dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-ner_model_cwchang_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-ner_model_cwchang_pipeline_en.md new file mode 100644 index 00000000000000..11514449d5170c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-ner_model_cwchang_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English ner_model_cwchang_pipeline pipeline DistilBertForTokenClassification from cwchang +author: John Snow Labs +name: ner_model_cwchang_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ner_model_cwchang_pipeline` is a English model originally trained by cwchang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ner_model_cwchang_pipeline_en_5.5.0_3.0_1725653995321.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ner_model_cwchang_pipeline_en_5.5.0_3.0_1725653995321.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("ner_model_cwchang_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("ner_model_cwchang_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
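+
+The snippet above assumes a DataFrame `df` with a `text` column. The sketch below pairs each token with its predicted tag; the `token` and `ner` output column names are assumptions based on the stages listed under Included Models, so confirm them with `printSchema()` on the transformed DataFrame.
+
+```python
+# Pair tokens with their predicted NER tags. Column names are assumptions.
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+from pyspark.sql import functions as F
+
+spark = sparknlp.start()
+df = spark.createDataFrame([["John Snow Labs is based in Delaware."]]).toDF("text")
+
+result = PretrainedPipeline("ner_model_cwchang_pipeline", lang="en").transform(df)
+
+result.select(F.arrays_zip("token.result", "ner.result").alias("tagged")) \
+    .selectExpr("explode(tagged) as pair") \
+    .show(truncate=False)
+```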
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ner_model_cwchang_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|505.5 MB| + +## References + +https://huggingface.co/cwchang/ner_model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-ope_bert_v2_1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-ope_bert_v2_1_pipeline_en.md new file mode 100644 index 00000000000000..b7c13553e778e2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-ope_bert_v2_1_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English ope_bert_v2_1_pipeline pipeline BertEmbeddings from RyotaroOKabe +author: John Snow Labs +name: ope_bert_v2_1_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ope_bert_v2_1_pipeline` is a English model originally trained by RyotaroOKabe. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ope_bert_v2_1_pipeline_en_5.5.0_3.0_1725614473293.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ope_bert_v2_1_pipeline_en_5.5.0_3.0_1725614473293.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("ope_bert_v2_1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("ope_bert_v2_1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ope_bert_v2_1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|407.3 MB| + +## References + +https://huggingface.co/RyotaroOKabe/ope_bert_v2.1 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-opus_maltese_english_chinese_finetuned_english_tonga_tonga_islands_chinese_galgame_en.md b/docs/_posts/ahmedlone127/2024-09-06-opus_maltese_english_chinese_finetuned_english_tonga_tonga_islands_chinese_galgame_en.md new file mode 100644 index 00000000000000..ac3eba6901d748 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-opus_maltese_english_chinese_finetuned_english_tonga_tonga_islands_chinese_galgame_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_english_chinese_finetuned_english_tonga_tonga_islands_chinese_galgame MarianTransformer from huhu233 +author: John Snow Labs +name: opus_maltese_english_chinese_finetuned_english_tonga_tonga_islands_chinese_galgame +date: 2024-09-06 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_chinese_finetuned_english_tonga_tonga_islands_chinese_galgame` is a English model originally trained by huhu233. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_chinese_finetuned_english_tonga_tonga_islands_chinese_galgame_en_5.5.0_3.0_1725635978257.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_chinese_finetuned_english_tonga_tonga_islands_chinese_galgame_en_5.5.0_3.0_1725635978257.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("opus_maltese_english_chinese_finetuned_english_tonga_tonga_islands_chinese_galgame","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("opus_maltese_english_chinese_finetuned_english_tonga_tonga_islands_chinese_galgame","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_chinese_finetuned_english_tonga_tonga_islands_chinese_galgame| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|540.7 MB| + +## References + +https://huggingface.co/huhu233/opus-mt-en-zh-finetuned-en-to-zh-galgame \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-opus_maltese_english_multiple_languages_finetuned_english_tonga_tonga_islands_thai_pong_en.md b/docs/_posts/ahmedlone127/2024-09-06-opus_maltese_english_multiple_languages_finetuned_english_tonga_tonga_islands_thai_pong_en.md new file mode 100644 index 00000000000000..cdb6f7fc5ed251 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-opus_maltese_english_multiple_languages_finetuned_english_tonga_tonga_islands_thai_pong_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_english_multiple_languages_finetuned_english_tonga_tonga_islands_thai_pong MarianTransformer from pong +author: John Snow Labs +name: opus_maltese_english_multiple_languages_finetuned_english_tonga_tonga_islands_thai_pong +date: 2024-09-06 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_multiple_languages_finetuned_english_tonga_tonga_islands_thai_pong` is a English model originally trained by pong. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_multiple_languages_finetuned_english_tonga_tonga_islands_thai_pong_en_5.5.0_3.0_1725635704566.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_multiple_languages_finetuned_english_tonga_tonga_islands_thai_pong_en_5.5.0_3.0_1725635704566.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("opus_maltese_english_multiple_languages_finetuned_english_tonga_tonga_islands_thai_pong","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("opus_maltese_english_multiple_languages_finetuned_english_tonga_tonga_islands_thai_pong","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_multiple_languages_finetuned_english_tonga_tonga_islands_thai_pong| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|530.2 MB| + +## References + +https://huggingface.co/pong/opus-mt-en-mul-finetuned-en-to-th \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-opus_maltese_russian_english_end_tonga_tonga_islands_end_russian_tonga_tonga_islands_english_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-opus_maltese_russian_english_end_tonga_tonga_islands_end_russian_tonga_tonga_islands_english_pipeline_en.md new file mode 100644 index 00000000000000..9704c83c5d4ddb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-opus_maltese_russian_english_end_tonga_tonga_islands_end_russian_tonga_tonga_islands_english_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_russian_english_end_tonga_tonga_islands_end_russian_tonga_tonga_islands_english_pipeline pipeline MarianTransformer from Dentikka +author: John Snow Labs +name: opus_maltese_russian_english_end_tonga_tonga_islands_end_russian_tonga_tonga_islands_english_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_russian_english_end_tonga_tonga_islands_end_russian_tonga_tonga_islands_english_pipeline` is a English model originally trained by Dentikka. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_russian_english_end_tonga_tonga_islands_end_russian_tonga_tonga_islands_english_pipeline_en_5.5.0_3.0_1725636334648.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_russian_english_end_tonga_tonga_islands_end_russian_tonga_tonga_islands_english_pipeline_en_5.5.0_3.0_1725636334648.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_maltese_russian_english_end_tonga_tonga_islands_end_russian_tonga_tonga_islands_english_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_maltese_russian_english_end_tonga_tonga_islands_end_russian_tonga_tonga_islands_english_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_russian_english_end_tonga_tonga_islands_end_russian_tonga_tonga_islands_english_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|525.7 MB| + +## References + +https://huggingface.co/Dentikka/opus-mt-ru-en-end-to-end-ru-to-en + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-politics_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-politics_pipeline_en.md new file mode 100644 index 00000000000000..679b9a837527a0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-politics_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English politics_pipeline pipeline RoBertaEmbeddings from launch +author: John Snow Labs +name: politics_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`politics_pipeline` is a English model originally trained by launch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/politics_pipeline_en_5.5.0_3.0_1725661233849.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/politics_pipeline_en_5.5.0_3.0_1725661233849.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("politics_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("politics_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|politics_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|466.3 MB| + +## References + +https://huggingface.co/launch/POLITICS + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-qa_synth_data_with_unanswerable_23_aug_xlm_roberta_base_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-qa_synth_data_with_unanswerable_23_aug_xlm_roberta_base_pipeline_en.md new file mode 100644 index 00000000000000..7006610b48a66a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-qa_synth_data_with_unanswerable_23_aug_xlm_roberta_base_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English qa_synth_data_with_unanswerable_23_aug_xlm_roberta_base_pipeline pipeline XlmRoBertaForQuestionAnswering from am-infoweb +author: John Snow Labs +name: qa_synth_data_with_unanswerable_23_aug_xlm_roberta_base_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`qa_synth_data_with_unanswerable_23_aug_xlm_roberta_base_pipeline` is a English model originally trained by am-infoweb. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/qa_synth_data_with_unanswerable_23_aug_xlm_roberta_base_pipeline_en_5.5.0_3.0_1725640391344.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/qa_synth_data_with_unanswerable_23_aug_xlm_roberta_base_pipeline_en_5.5.0_3.0_1725640391344.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("qa_synth_data_with_unanswerable_23_aug_xlm_roberta_base_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("qa_synth_data_with_unanswerable_23_aug_xlm_roberta_base_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|qa_synth_data_with_unanswerable_23_aug_xlm_roberta_base_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|801.1 MB| + +## References + +https://huggingface.co/am-infoweb/QA_SYNTH_DATA_WITH_UNANSWERABLE_23_AUG_xlm_roberta-base + +## Included Models + +- MultiDocumentAssembler +- XlmRoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-qa_synthetic_data_only_18_aug_xlm_roberta_base_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-qa_synthetic_data_only_18_aug_xlm_roberta_base_pipeline_en.md new file mode 100644 index 00000000000000..d0cff2c5809176 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-qa_synthetic_data_only_18_aug_xlm_roberta_base_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English qa_synthetic_data_only_18_aug_xlm_roberta_base_pipeline pipeline XlmRoBertaForQuestionAnswering from am-infoweb +author: John Snow Labs +name: qa_synthetic_data_only_18_aug_xlm_roberta_base_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`qa_synthetic_data_only_18_aug_xlm_roberta_base_pipeline` is a English model originally trained by am-infoweb. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/qa_synthetic_data_only_18_aug_xlm_roberta_base_pipeline_en_5.5.0_3.0_1725598531976.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/qa_synthetic_data_only_18_aug_xlm_roberta_base_pipeline_en_5.5.0_3.0_1725598531976.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("qa_synthetic_data_only_18_aug_xlm_roberta_base_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("qa_synthetic_data_only_18_aug_xlm_roberta_base_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|qa_synthetic_data_only_18_aug_xlm_roberta_base_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|802.1 MB| + +## References + +https://huggingface.co/am-infoweb/QA_SYNTHETIC_DATA_ONLY_18_AUG_xlm-roberta-base + +## Included Models + +- MultiDocumentAssembler +- XlmRoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-question_answer_thirdeyedata_en.md b/docs/_posts/ahmedlone127/2024-09-06-question_answer_thirdeyedata_en.md new file mode 100644 index 00000000000000..fb34d5010fe966 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-question_answer_thirdeyedata_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English question_answer_thirdeyedata DistilBertForQuestionAnswering from ThirdEyeData +author: John Snow Labs +name: question_answer_thirdeyedata +date: 2024-09-06 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`question_answer_thirdeyedata` is a English model originally trained by ThirdEyeData. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/question_answer_thirdeyedata_en_5.5.0_3.0_1725621832036.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/question_answer_thirdeyedata_en_5.5.0_3.0_1725621832036.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("question_answer_thirdeyedata","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("question_answer_thirdeyedata", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
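+
+A brief follow-up to the snippet above: the extracted answer span can be read from the `answer` column produced by DistilBertForQuestionAnswering.
+
+```python
+# Print the extracted answer text (assumes `pipelineDF` from the Python snippet above).
+pipelineDF.selectExpr("explode(answer.result) as answer_text").show(truncate=False)
+```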
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|question_answer_thirdeyedata| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/ThirdEyeData/Question_Answer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-rotten_tomatoes_microsoft_deberta_v3_base_seed_2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-rotten_tomatoes_microsoft_deberta_v3_base_seed_2_pipeline_en.md new file mode 100644 index 00000000000000..7840f0ab643032 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-rotten_tomatoes_microsoft_deberta_v3_base_seed_2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English rotten_tomatoes_microsoft_deberta_v3_base_seed_2_pipeline pipeline DeBertaForSequenceClassification from utahnlp +author: John Snow Labs +name: rotten_tomatoes_microsoft_deberta_v3_base_seed_2_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`rotten_tomatoes_microsoft_deberta_v3_base_seed_2_pipeline` is a English model originally trained by utahnlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/rotten_tomatoes_microsoft_deberta_v3_base_seed_2_pipeline_en_5.5.0_3.0_1725588897696.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/rotten_tomatoes_microsoft_deberta_v3_base_seed_2_pipeline_en_5.5.0_3.0_1725588897696.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("rotten_tomatoes_microsoft_deberta_v3_base_seed_2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("rotten_tomatoes_microsoft_deberta_v3_base_seed_2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|rotten_tomatoes_microsoft_deberta_v3_base_seed_2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|577.9 MB| + +## References + +https://huggingface.co/utahnlp/rotten_tomatoes_microsoft_deberta-v3-base_seed-2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-sst2_microsoft_deberta_v3_base_seed_2_en.md b/docs/_posts/ahmedlone127/2024-09-06-sst2_microsoft_deberta_v3_base_seed_2_en.md new file mode 100644 index 00000000000000..fc74b03b7b354e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-sst2_microsoft_deberta_v3_base_seed_2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English sst2_microsoft_deberta_v3_base_seed_2 DeBertaForSequenceClassification from utahnlp +author: John Snow Labs +name: sst2_microsoft_deberta_v3_base_seed_2 +date: 2024-09-06 +tags: [en, open_source, onnx, sequence_classification, deberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DeBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sst2_microsoft_deberta_v3_base_seed_2` is a English model originally trained by utahnlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sst2_microsoft_deberta_v3_base_seed_2_en_5.5.0_3.0_1725609729613.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sst2_microsoft_deberta_v3_base_seed_2_en_5.5.0_3.0_1725609729613.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+# assemble raw text into document annotations, tokenize, then classify
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DeBertaForSequenceClassification.pretrained("sst2_microsoft_deberta_v3_base_seed_2","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DeBertaForSequenceClassification.pretrained("sst2_microsoft_deberta_v3_base_seed_2", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
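+
+The `class` column produced above holds Spark NLP annotations; the predicted label sits in their `result` field. A short follow-up (not part of the original card) to inspect predictions:
+
+```python
+# Show the input text next to the predicted label(s)
+pipelineDF.select("text", "class.result").show(truncate=False)
+```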
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sst2_microsoft_deberta_v3_base_seed_2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|574.1 MB| + +## References + +https://huggingface.co/utahnlp/sst2_microsoft_deberta-v3-base_seed-2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-testarbaraz_en.md b/docs/_posts/ahmedlone127/2024-09-06-testarbaraz_en.md new file mode 100644 index 00000000000000..0eaaa28a743706 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-testarbaraz_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English testarbaraz DistilBertForQuestionAnswering from kevinbram +author: John Snow Labs +name: testarbaraz +date: 2024-09-06 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`testarbaraz` is a English model originally trained by kevinbram. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/testarbaraz_en_5.5.0_3.0_1725654578323.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/testarbaraz_en_5.5.0_3.0_1725654578323.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+# assemble the raw question and context columns into annotation columns
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("testarbaraz","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("testarbaraz", "en")
+    .setInputCols(Array("document_question", "document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|testarbaraz| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/kevinbram/testarbaraz \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-text_complexity_roberta_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-text_complexity_roberta_pipeline_en.md new file mode 100644 index 00000000000000..a7d912057f72a0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-text_complexity_roberta_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English text_complexity_roberta_pipeline pipeline XlmRoBertaForTokenClassification from k0nv1ct +author: John Snow Labs +name: text_complexity_roberta_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`text_complexity_roberta_pipeline` is a English model originally trained by k0nv1ct. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/text_complexity_roberta_pipeline_en_5.5.0_3.0_1725591938952.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/text_complexity_roberta_pipeline_en_5.5.0_3.0_1725591938952.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("text_complexity_roberta_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("text_complexity_roberta_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|text_complexity_roberta_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|805.9 MB| + +## References + +https://huggingface.co/k0nv1ct/text-complexity-roberta + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-torch_distilbert_policies_comparison_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-torch_distilbert_policies_comparison_pipeline_en.md new file mode 100644 index 00000000000000..abac95bdbcca39 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-torch_distilbert_policies_comparison_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English torch_distilbert_policies_comparison_pipeline pipeline DistilBertForSequenceClassification from rubivivi +author: John Snow Labs +name: torch_distilbert_policies_comparison_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`torch_distilbert_policies_comparison_pipeline` is a English model originally trained by rubivivi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/torch_distilbert_policies_comparison_pipeline_en_5.5.0_3.0_1725607851230.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/torch_distilbert_policies_comparison_pipeline_en_5.5.0_3.0_1725607851230.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("torch_distilbert_policies_comparison_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("torch_distilbert_policies_comparison_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|torch_distilbert_policies_comparison_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/rubivivi/torch_distilbert_policies_comparison + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-trained_english_en.md b/docs/_posts/ahmedlone127/2024-09-06-trained_english_en.md new file mode 100644 index 00000000000000..9f9bba32c0f28a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-trained_english_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English trained_english DistilBertForTokenClassification from annamariagnat +author: John Snow Labs +name: trained_english +date: 2024-09-06 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`trained_english` is a English model originally trained by annamariagnat. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/trained_english_en_5.5.0_3.0_1725599514241.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/trained_english_en_5.5.0_3.0_1725599514241.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+# assemble raw text into document annotations, tokenize, then tag tokens
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("trained_english","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("trained_english", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|trained_english| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|505.4 MB| + +## References + +https://huggingface.co/annamariagnat/trained_english \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-whisper_tiny_german_primeline_de.md b/docs/_posts/ahmedlone127/2024-09-06-whisper_tiny_german_primeline_de.md new file mode 100644 index 00000000000000..427588622bbe42 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-whisper_tiny_german_primeline_de.md @@ -0,0 +1,84 @@ +--- +layout: model +title: German whisper_tiny_german_primeline WhisperForCTC from primeline +author: John Snow Labs +name: whisper_tiny_german_primeline +date: 2024-09-06 +tags: [de, open_source, onnx, asr, whisper] +task: Automatic Speech Recognition +language: de +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: WhisperForCTC +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_tiny_german_primeline` is a German model originally trained by primeline. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_tiny_german_primeline_de_5.5.0_3.0_1725586306741.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_tiny_german_primeline_de_5.5.0_3.0_1725586306741.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+# "data" is assumed to be a DataFrame with an "audio_content" column of
+# float audio samples (see the note below this example).
+audioAssembler = AudioAssembler() \
+    .setInputCol("audio_content") \
+    .setOutputCol("audio_assembler")
+
+speechToText = WhisperForCTC.pretrained("whisper_tiny_german_primeline","de") \
+    .setInputCols(["audio_assembler"]) \
+    .setOutputCol("text")
+
+pipeline = Pipeline().setStages([audioAssembler, speechToText])
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.AudioAssembler
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+
+// "data" is assumed to be a DataFrame with an "audio_content" column of
+// float audio samples (see the note below this example).
+val audioAssembler = new AudioAssembler()
+    .setInputCol("audio_content")
+    .setOutputCol("audio_assembler")
+
+val speechToText = WhisperForCTC.pretrained("whisper_tiny_german_primeline", "de")
+    .setInputCols(Array("audio_assembler"))
+    .setOutputCol("text")
+
+val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
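+
+The `data` DataFrame referenced above is assumed to already exist. A hypothetical sketch of building one with plain PySpark is shown below; the single `audio_content` column of float samples matches the AudioAssembler input configured above, and the one-second silent clip is purely illustrative (element types may need casting to float depending on your setup):
+
+```python
+# Purely illustrative audio: one second of silence at 16 kHz.
+# In practice, load real samples with an audio library of your choice.
+raw_audio = [0.0] * 16000
+data = spark.createDataFrame([[raw_audio]]).toDF("audio_content")
+```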
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_tiny_german_primeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[audio_assembler]| +|Output Labels:|[text]| +|Language:|de| +|Size:|187.5 MB| + +## References + +https://huggingface.co/primeline/whisper-tiny-german \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-xlm_roberta_base_finetuned_panx_french_transformersbook_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-xlm_roberta_base_finetuned_panx_french_transformersbook_pipeline_en.md new file mode 100644 index 00000000000000..6f92c1897b0bb8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-xlm_roberta_base_finetuned_panx_french_transformersbook_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_transformersbook_pipeline pipeline XlmRoBertaForTokenClassification from transformersbook +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_transformersbook_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_transformersbook_pipeline` is a English model originally trained by transformersbook. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_transformersbook_pipeline_en_5.5.0_3.0_1725658018558.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_transformersbook_pipeline_en_5.5.0_3.0_1725658018558.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_french_transformersbook_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_french_transformersbook_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_transformersbook_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|840.9 MB| + +## References + +https://huggingface.co/transformersbook/xlm-roberta-base-finetuned-panx-fr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-xlm_roberta_base_finetuned_rugo_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-xlm_roberta_base_finetuned_rugo_pipeline_en.md new file mode 100644 index 00000000000000..14927c07290b19 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-xlm_roberta_base_finetuned_rugo_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_rugo_pipeline pipeline XlmRoBertaEmbeddings from rugo +author: John Snow Labs +name: xlm_roberta_base_finetuned_rugo_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_rugo_pipeline` is a English model originally trained by rugo. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_rugo_pipeline_en_5.5.0_3.0_1725596585073.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_rugo_pipeline_en_5.5.0_3.0_1725596585073.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_rugo_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_rugo_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_rugo_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|987.4 MB| + +## References + +https://huggingface.co/rugo/xlm-roberta-base-finetuned + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-xlm_roberta_base_hungarian_ner_huner_pipeline_hu.md b/docs/_posts/ahmedlone127/2024-09-06-xlm_roberta_base_hungarian_ner_huner_pipeline_hu.md new file mode 100644 index 00000000000000..1c4202eeedc051 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-xlm_roberta_base_hungarian_ner_huner_pipeline_hu.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Hungarian xlm_roberta_base_hungarian_ner_huner_pipeline pipeline XlmRoBertaForTokenClassification from EvanD +author: John Snow Labs +name: xlm_roberta_base_hungarian_ner_huner_pipeline +date: 2024-09-06 +tags: [hu, open_source, pipeline, onnx] +task: Named Entity Recognition +language: hu +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_hungarian_ner_huner_pipeline` is a Hungarian model originally trained by EvanD. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_hungarian_ner_huner_pipeline_hu_5.5.0_3.0_1725656559694.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_hungarian_ner_huner_pipeline_hu_5.5.0_3.0_1725656559694.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_hungarian_ner_huner_pipeline", lang = "hu") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_hungarian_ner_huner_pipeline", lang = "hu") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_hungarian_ner_huner_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|hu| +|Size:|784.0 MB| + +## References + +https://huggingface.co/EvanD/xlm-roberta-base-hungarian-ner-huner + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-xlm_roberta_base_russian_sentiment_rusentiment_ru.md b/docs/_posts/ahmedlone127/2024-09-06-xlm_roberta_base_russian_sentiment_rusentiment_ru.md new file mode 100644 index 00000000000000..43b96cd851750d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-xlm_roberta_base_russian_sentiment_rusentiment_ru.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Russian xlm_roberta_base_russian_sentiment_rusentiment XlmRoBertaForSequenceClassification from sismetanin +author: John Snow Labs +name: xlm_roberta_base_russian_sentiment_rusentiment +date: 2024-09-06 +tags: [ru, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: ru +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_russian_sentiment_rusentiment` is a Russian model originally trained by sismetanin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_russian_sentiment_rusentiment_ru_5.5.0_3.0_1725617348477.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_russian_sentiment_rusentiment_ru_5.5.0_3.0_1725617348477.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+# assemble raw text into document annotations, tokenize, then classify
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_roberta_base_russian_sentiment_rusentiment","ru") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_roberta_base_russian_sentiment_rusentiment", "ru")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_russian_sentiment_rusentiment| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|ru| +|Size:|799.7 MB| + +## References + +https://huggingface.co/sismetanin/xlm_roberta_base-ru-sentiment-rusentiment \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-xlm_roberta_qa_Part_2_XLM_Model_E1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-06-xlm_roberta_qa_Part_2_XLM_Model_E1_pipeline_en.md new file mode 100644 index 00000000000000..64d729fbeeb0d6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-xlm_roberta_qa_Part_2_XLM_Model_E1_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English xlm_roberta_qa_Part_2_XLM_Model_E1_pipeline pipeline XlmRoBertaForQuestionAnswering from horsbug98 +author: John Snow Labs +name: xlm_roberta_qa_Part_2_XLM_Model_E1_pipeline +date: 2024-09-06 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_qa_Part_2_XLM_Model_E1_pipeline` is a English model originally trained by horsbug98. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_qa_Part_2_XLM_Model_E1_pipeline_en_5.5.0_3.0_1725640624793.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_qa_Part_2_XLM_Model_E1_pipeline_en_5.5.0_3.0_1725640624793.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_qa_Part_2_XLM_Model_E1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_qa_Part_2_XLM_Model_E1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_qa_Part_2_XLM_Model_E1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|814.1 MB| + +## References + +https://huggingface.co/horsbug98/Part_2_XLM_Model_E1 + +## Included Models + +- MultiDocumentAssembler +- XlmRoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-06-xlmroberta_ner_base_finetuned_ner_swahili_pipeline_sw.md b/docs/_posts/ahmedlone127/2024-09-06-xlmroberta_ner_base_finetuned_ner_swahili_pipeline_sw.md new file mode 100644 index 00000000000000..10759b76ce866d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-06-xlmroberta_ner_base_finetuned_ner_swahili_pipeline_sw.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Swahili (macrolanguage) xlmroberta_ner_base_finetuned_ner_swahili_pipeline pipeline XlmRoBertaForTokenClassification from mbeukman +author: John Snow Labs +name: xlmroberta_ner_base_finetuned_ner_swahili_pipeline +date: 2024-09-06 +tags: [sw, open_source, pipeline, onnx] +task: Named Entity Recognition +language: sw +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlmroberta_ner_base_finetuned_ner_swahili_pipeline` is a Swahili (macrolanguage) model originally trained by mbeukman. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_base_finetuned_ner_swahili_pipeline_sw_5.5.0_3.0_1725657551804.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_base_finetuned_ner_swahili_pipeline_sw_5.5.0_3.0_1725657551804.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlmroberta_ner_base_finetuned_ner_swahili_pipeline", lang = "sw") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlmroberta_ner_base_finetuned_ner_swahili_pipeline", lang = "sw") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlmroberta_ner_base_finetuned_ner_swahili_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|sw| +|Size:|776.7 MB| + +## References + +https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-ner-swahili + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-action_policy_plans_classifier_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-action_policy_plans_classifier_pipeline_en.md new file mode 100644 index 00000000000000..262dd6f307b609 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-action_policy_plans_classifier_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English action_policy_plans_classifier_pipeline pipeline MPNetForSequenceClassification from ppsingh +author: John Snow Labs +name: action_policy_plans_classifier_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`action_policy_plans_classifier_pipeline` is a English model originally trained by ppsingh. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/action_policy_plans_classifier_pipeline_en_5.5.0_3.0_1725733479711.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/action_policy_plans_classifier_pipeline_en_5.5.0_3.0_1725733479711.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("action_policy_plans_classifier_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("action_policy_plans_classifier_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|action_policy_plans_classifier_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|409.2 MB| + +## References + +https://huggingface.co/ppsingh/action-policy-plans-classifier + +## Included Models + +- DocumentAssembler +- TokenizerModel +- MPNetForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-adress_parser_model_epochs_en.md b/docs/_posts/ahmedlone127/2024-09-07-adress_parser_model_epochs_en.md new file mode 100644 index 00000000000000..0115dd66a1310d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-adress_parser_model_epochs_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English adress_parser_model_epochs DistilBertForTokenClassification from ManuelMM +author: John Snow Labs +name: adress_parser_model_epochs +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`adress_parser_model_epochs` is a English model originally trained by ManuelMM. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/adress_parser_model_epochs_en_5.5.0_3.0_1725730526577.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/adress_parser_model_epochs_en_5.5.0_3.0_1725730526577.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+# assemble raw text into document annotations, tokenize, then tag tokens
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("adress_parser_model_epochs","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("adress_parser_model_epochs", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|adress_parser_model_epochs| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/ManuelMM/adress_parser_model_epochs \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-anus_wanus_panus_ranus_en.md b/docs/_posts/ahmedlone127/2024-09-07-anus_wanus_panus_ranus_en.md new file mode 100644 index 00000000000000..71097e34c5bb12 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-anus_wanus_panus_ranus_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English anus_wanus_panus_ranus DistilBertForSequenceClassification from namebobb +author: John Snow Labs +name: anus_wanus_panus_ranus +date: 2024-09-07 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`anus_wanus_panus_ranus` is a English model originally trained by namebobb. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/anus_wanus_panus_ranus_en_5.5.0_3.0_1725674939870.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/anus_wanus_panus_ranus_en_5.5.0_3.0_1725674939870.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+# assemble raw text into document annotations, tokenize, then classify
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DistilBertForSequenceClassification.pretrained("anus_wanus_panus_ranus","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DistilBertForSequenceClassification.pretrained("anus_wanus_panus_ranus", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|anus_wanus_panus_ranus| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/namebobb/anus-wanus-panus-ranus \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-bert_base_cased_ner_conll2003_en.md b/docs/_posts/ahmedlone127/2024-09-07-bert_base_cased_ner_conll2003_en.md new file mode 100644 index 00000000000000..1fd71afebe68aa --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-bert_base_cased_ner_conll2003_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English bert_base_cased_ner_conll2003 BertForTokenClassification from andi611 +author: John Snow Labs +name: bert_base_cased_ner_conll2003 +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, bert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_base_cased_ner_conll2003` is a English model originally trained by andi611. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_cased_ner_conll2003_en_5.5.0_3.0_1725726776945.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_cased_ner_conll2003_en_5.5.0_3.0_1725726776945.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+# assemble raw text into document annotations, tokenize, then tag tokens
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = BertForTokenClassification.pretrained("bert_base_cased_ner_conll2003","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = BertForTokenClassification.pretrained("bert_base_cased_ner_conll2003", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_cased_ner_conll2003| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|403.7 MB| + +## References + +https://huggingface.co/andi611/bert-base-cased-ner-conll2003 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-bert_base_dutch_cased_finetuned_mbert_finetuned_ner_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-bert_base_dutch_cased_finetuned_mbert_finetuned_ner_pipeline_en.md new file mode 100644 index 00000000000000..c9a64f5dfa7c5c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-bert_base_dutch_cased_finetuned_mbert_finetuned_ner_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bert_base_dutch_cased_finetuned_mbert_finetuned_ner_pipeline pipeline DistilBertForTokenClassification from sindhujag26 +author: John Snow Labs +name: bert_base_dutch_cased_finetuned_mbert_finetuned_ner_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_base_dutch_cased_finetuned_mbert_finetuned_ner_pipeline` is a English model originally trained by sindhujag26. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_dutch_cased_finetuned_mbert_finetuned_ner_pipeline_en_5.5.0_3.0_1725734181855.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_dutch_cased_finetuned_mbert_finetuned_ner_pipeline_en_5.5.0_3.0_1725734181855.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_base_dutch_cased_finetuned_mbert_finetuned_ner_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_base_dutch_cased_finetuned_mbert_finetuned_ner_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_dutch_cased_finetuned_mbert_finetuned_ner_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|505.4 MB| + +## References + +https://huggingface.co/sindhujag26/bert-base-dutch-cased-finetuned-mBERT-finetuned-ner + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-bert_base_turkish_uncased_ner_pipeline_tr.md b/docs/_posts/ahmedlone127/2024-09-07-bert_base_turkish_uncased_ner_pipeline_tr.md new file mode 100644 index 00000000000000..96456f68218f4f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-bert_base_turkish_uncased_ner_pipeline_tr.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Turkish bert_base_turkish_uncased_ner_pipeline pipeline BertForTokenClassification from saribasmetehan +author: John Snow Labs +name: bert_base_turkish_uncased_ner_pipeline +date: 2024-09-07 +tags: [tr, open_source, pipeline, onnx] +task: Named Entity Recognition +language: tr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_base_turkish_uncased_ner_pipeline` is a Turkish model originally trained by saribasmetehan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_turkish_uncased_ner_pipeline_tr_5.5.0_3.0_1725726110412.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_turkish_uncased_ner_pipeline_tr_5.5.0_3.0_1725726110412.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_base_turkish_uncased_ner_pipeline", lang = "tr") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_base_turkish_uncased_ner_pipeline", lang = "tr") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_turkish_uncased_ner_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|tr| +|Size:|412.6 MB| + +## References + +https://huggingface.co/saribasmetehan/bert-base-turkish-uncased-ner + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-bert_finetuned_ner_skyimple_en.md b/docs/_posts/ahmedlone127/2024-09-07-bert_finetuned_ner_skyimple_en.md new file mode 100644 index 00000000000000..815789cb4f2635 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-bert_finetuned_ner_skyimple_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English bert_finetuned_ner_skyimple BertForTokenClassification from skyimple +author: John Snow Labs +name: bert_finetuned_ner_skyimple +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, bert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_finetuned_ner_skyimple` is a English model originally trained by skyimple. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_finetuned_ner_skyimple_en_5.5.0_3.0_1725734811457.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_finetuned_ner_skyimple_en_5.5.0_3.0_1725734811457.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+# assemble raw text into document annotations, tokenize, then tag tokens
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = BertForTokenClassification.pretrained("bert_finetuned_ner_skyimple","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = BertForTokenClassification.pretrained("bert_finetuned_ner_skyimple", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_finetuned_ner_skyimple| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|403.7 MB| + +## References + +https://huggingface.co/skyimple/bert-finetuned-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-bert_finetuned_squad_chaii_en.md b/docs/_posts/ahmedlone127/2024-09-07-bert_finetuned_squad_chaii_en.md new file mode 100644 index 00000000000000..154fac047e2ce5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-bert_finetuned_squad_chaii_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English bert_finetuned_squad_chaii XlmRoBertaForQuestionAnswering from SmartPy +author: John Snow Labs +name: bert_finetuned_squad_chaii +date: 2024-09-07 +tags: [en, open_source, onnx, question_answering, xlm_roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_finetuned_squad_chaii` is a English model originally trained by SmartPy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_finetuned_squad_chaii_en_5.5.0_3.0_1725685790589.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_finetuned_squad_chaii_en_5.5.0_3.0_1725685790589.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import XlmRoBertaForQuestionAnswering
from pyspark.ml import Pipeline

# assumes an active Spark session (`spark`) with Spark NLP loaded
documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = XlmRoBertaForQuestionAnswering.pretrained("bert_finetuned_squad_chaii","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)
```
```scala
val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = XlmRoBertaForQuestionAnswering.pretrained("bert_finetuned_squad_chaii", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)
```
</div>
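
The extracted span lands in the `answer` column, one annotation per row. A minimal sketch for reading it back, assuming the `pipelineDF` built above:

```python
# one predicted answer string per question/context pair
pipelineDF.selectExpr("question", "explode(answer.result) as predicted_answer").show(truncate=False)
```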
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_finetuned_squad_chaii| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|886.7 MB| + +## References + +https://huggingface.co/SmartPy/bert-finetuned-squad-chaii \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-biodivbert_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-biodivbert_pipeline_en.md new file mode 100644 index 00000000000000..7cfc45a542aa86 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-biodivbert_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English biodivbert_pipeline pipeline BertForTokenClassification from NoYo25 +author: John Snow Labs +name: biodivbert_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`biodivbert_pipeline` is a English model originally trained by NoYo25. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/biodivbert_pipeline_en_5.5.0_3.0_1725735191782.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/biodivbert_pipeline_en_5.5.0_3.0_1725735191782.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("biodivbert_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("biodivbert_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
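
The `df` passed to the pipeline is any DataFrame with a `text` column; the pipeline appends its annotation columns to it. A minimal end-to-end sketch, assuming an active Spark NLP session; the example sentence is illustrative and the `ner` output column name follows the usual convention for these pipelines, so check the transformed schema if it differs:

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("biodivbert_pipeline", lang="en")

# any DataFrame with a "text" column works as input
df = spark.createDataFrame([["Plant diversity declines sharply in fragmented habitats."]]).toDF("text")
annotations = pipeline.transform(df)
annotations.selectExpr("explode(ner.result) as ner_label").show(truncate=False)
```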
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|biodivbert_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|403.6 MB| + +## References + +https://huggingface.co/NoYo25/BiodivBERT + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-brwac_v1_2__checkpoint_last_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-brwac_v1_2__checkpoint_last_pipeline_en.md new file mode 100644 index 00000000000000..f0266678081287 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-brwac_v1_2__checkpoint_last_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English brwac_v1_2__checkpoint_last_pipeline pipeline RoBertaEmbeddings from eduagarcia-temp +author: John Snow Labs +name: brwac_v1_2__checkpoint_last_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`brwac_v1_2__checkpoint_last_pipeline` is a English model originally trained by eduagarcia-temp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/brwac_v1_2__checkpoint_last_pipeline_en_5.5.0_3.0_1725716536315.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/brwac_v1_2__checkpoint_last_pipeline_en_5.5.0_3.0_1725716536315.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("brwac_v1_2__checkpoint_last_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("brwac_v1_2__checkpoint_last_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
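
For quick experiments, the same pretrained pipeline can also be run on a single string with `annotate`, which returns plain Python lists instead of a DataFrame. A small sketch, assuming an active Spark NLP session:

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("brwac_v1_2__checkpoint_last_pipeline", lang="en")

# annotate() runs the pipeline eagerly on one string; the keys list the output columns
result = pipeline.annotate("I love Spark NLP")
print(result.keys())
```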
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|brwac_v1_2__checkpoint_last_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|298.3 MB| + +## References + +https://huggingface.co/eduagarcia-temp/brwac_v1_2__checkpoint_last + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-bsc_bio_ehr_spanish_carmen_farmaco_es.md b/docs/_posts/ahmedlone127/2024-09-07-bsc_bio_ehr_spanish_carmen_farmaco_es.md new file mode 100644 index 00000000000000..1f8239ce79bfad --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-bsc_bio_ehr_spanish_carmen_farmaco_es.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Castilian, Spanish bsc_bio_ehr_spanish_carmen_farmaco RoBertaForTokenClassification from BSC-NLP4BIA +author: John Snow Labs +name: bsc_bio_ehr_spanish_carmen_farmaco +date: 2024-09-07 +tags: [es, open_source, onnx, token_classification, roberta, ner] +task: Named Entity Recognition +language: es +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bsc_bio_ehr_spanish_carmen_farmaco` is a Castilian, Spanish model originally trained by BSC-NLP4BIA. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bsc_bio_ehr_spanish_carmen_farmaco_es_5.5.0_3.0_1725723393534.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bsc_bio_ehr_spanish_carmen_farmaco_es_5.5.0_3.0_1725723393534.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, RoBertaForTokenClassification
from pyspark.ml import Pipeline

# assumes an active Spark session (`spark`) with Spark NLP loaded
documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

# input columns must match the output columns defined above
tokenClassifier = RoBertaForTokenClassification.pretrained("bsc_bio_ehr_spanish_carmen_farmaco","es") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)
```
```scala
val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = RoBertaForTokenClassification.pretrained("bsc_bio_ehr_spanish_carmen_farmaco", "es")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)
```
</div>
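
Since the model was trained on Spanish clinical text, a Spanish sentence is a more representative input than the English placeholder above. A short sketch, assuming the `pipeline` defined in the example; the sentence and any entities it yields are illustrative only:

```python
# Spanish clinical-style example for the pharmacological NER model
data_es = spark.createDataFrame([["El paciente recibió paracetamol y omeprazol durante el ingreso."]]).toDF("text")
pipeline.fit(data_es).transform(data_es) \
    .selectExpr("explode(ner.result) as ner_label").show(truncate=False)
```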
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bsc_bio_ehr_spanish_carmen_farmaco| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|es| +|Size:|438.1 MB| + +## References + +https://huggingface.co/BSC-NLP4BIA/bsc-bio-ehr-es-carmen-farmaco \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_distil_huner_model_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_distil_huner_model_en.md new file mode 100644 index 00000000000000..b20292ba98026f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_distil_huner_model_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English burmese_awesome_distil_huner_model DistilBertForTokenClassification from Balu94pratap +author: John Snow Labs +name: burmese_awesome_distil_huner_model +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_distil_huner_model` is a English model originally trained by Balu94pratap. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_distil_huner_model_en_5.5.0_3.0_1725731017719.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_distil_huner_model_en_5.5.0_3.0_1725731017719.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DistilBertForTokenClassification
from pyspark.ml import Pipeline

# assumes an active Spark session (`spark`) with Spark NLP loaded
documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

# input columns must match the output columns defined above
tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_awesome_distil_huner_model","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)
```
```scala
val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_awesome_distil_huner_model", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)
```
</div>
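
The model name suggests fine-tuning on HUNER-style biomedical NER data, so a biomedical sentence is likely a more representative input; this is an assumption based on the name alone. A short sketch reusing the `pipeline` from the example:

```python
# illustrative biomedical sentence; the entities shown depend on the model's label set
data_bio = spark.createDataFrame([["Mutations in BRCA1 increase the risk of breast cancer."]]).toDF("text")
pipeline.fit(data_bio).transform(data_bio) \
    .selectExpr("explode(ner.result) as ner_label").show(truncate=False)
```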
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_distil_huner_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/Balu94pratap/my_awesome_distil_huner_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_pec_model2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_pec_model2_pipeline_en.md new file mode 100644 index 00000000000000..b4af7ec6cbfc9c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_pec_model2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_awesome_pec_model2_pipeline pipeline DistilBertForTokenClassification from PaulBin +author: John Snow Labs +name: burmese_awesome_pec_model2_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_pec_model2_pipeline` is a English model originally trained by PaulBin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_pec_model2_pipeline_en_5.5.0_3.0_1725730186036.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_pec_model2_pipeline_en_5.5.0_3.0_1725730186036.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_pec_model2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_pec_model2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_pec_model2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/PaulBin/my_awesome_PEC_model2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model3_pipeline_en.md new file mode 100644 index 00000000000000..7d8f5f610a4d9e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model3_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English burmese_awesome_qa_model3_pipeline pipeline DistilBertForQuestionAnswering from jvasdigital +author: John Snow Labs +name: burmese_awesome_qa_model3_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model3_pipeline` is a English model originally trained by jvasdigital. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model3_pipeline_en_5.5.0_3.0_1725736365975.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model3_pipeline_en_5.5.0_3.0_1725736365975.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_qa_model3_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_qa_model3_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
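
Because this pipeline starts with a MultiDocumentAssembler feeding a question-answering model, the input DataFrame needs a question column and a context column rather than a single `text` column. A minimal sketch, assuming an active Spark NLP session; the input column names and the `answer` output column follow the usual convention for these QA pipelines and should be verified against the pipeline's stages:

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("burmese_awesome_qa_model3_pipeline", lang="en")

# question/context pairs instead of a single text column
df = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
pipeline.transform(df).selectExpr("explode(answer.result) as predicted_answer").show(truncate=False)
```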
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/jvasdigital/my_awesome_qa_model3 + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_alisadavtyan_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_alisadavtyan_en.md new file mode 100644 index 00000000000000..b5493f7affc4e6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_alisadavtyan_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_awesome_qa_model_alisadavtyan DistilBertForQuestionAnswering from AlisaDavtyan +author: John Snow Labs +name: burmese_awesome_qa_model_alisadavtyan +date: 2024-09-07 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_alisadavtyan` is a English model originally trained by AlisaDavtyan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_alisadavtyan_en_5.5.0_3.0_1725735676205.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_alisadavtyan_en_5.5.0_3.0_1725735676205.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import DistilBertForQuestionAnswering
from pyspark.ml import Pipeline

# assumes an active Spark session (`spark`) with Spark NLP loaded
documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_alisadavtyan","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)
```
```scala
val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_alisadavtyan", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)
```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_alisadavtyan| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/AlisaDavtyan/my_awesome_qa_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_d29_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_d29_en.md new file mode 100644 index 00000000000000..46015ac41c8a32 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_d29_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_awesome_qa_model_d29 DistilBertForQuestionAnswering from D29 +author: John Snow Labs +name: burmese_awesome_qa_model_d29 +date: 2024-09-07 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_d29` is a English model originally trained by D29. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_d29_en_5.5.0_3.0_1725722307567.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_d29_en_5.5.0_3.0_1725722307567.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import DistilBertForQuestionAnswering
from pyspark.ml import Pipeline

# assumes an active Spark session (`spark`) with Spark NLP loaded
documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_d29","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)
```
```scala
val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_d29", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)
```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_d29| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/D29/my_awesome_qa_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_jackyfung00358_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_jackyfung00358_pipeline_en.md new file mode 100644 index 00000000000000..671474c0ec8149 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_jackyfung00358_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English burmese_awesome_qa_model_jackyfung00358_pipeline pipeline DistilBertForQuestionAnswering from jackyfung00358 +author: John Snow Labs +name: burmese_awesome_qa_model_jackyfung00358_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_jackyfung00358_pipeline` is a English model originally trained by jackyfung00358. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_jackyfung00358_pipeline_en_5.5.0_3.0_1725727384901.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_jackyfung00358_pipeline_en_5.5.0_3.0_1725727384901.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_qa_model_jackyfung00358_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_qa_model_jackyfung00358_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_jackyfung00358_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/jackyfung00358/my_awesome_qa_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_krayray_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_krayray_en.md new file mode 100644 index 00000000000000..9da788ccb51160 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_krayray_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_awesome_qa_model_krayray DistilBertForQuestionAnswering from KRayRay +author: John Snow Labs +name: burmese_awesome_qa_model_krayray +date: 2024-09-07 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_krayray` is a English model originally trained by KRayRay. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_krayray_en_5.5.0_3.0_1725695341018.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_krayray_en_5.5.0_3.0_1725695341018.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import DistilBertForQuestionAnswering
from pyspark.ml import Pipeline

# assumes an active Spark session (`spark`) with Spark NLP loaded
documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_krayray","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)
```
```scala
val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_krayray", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)
```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_krayray| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/KRayRay/my_awesome_qa_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_laanhtu_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_laanhtu_en.md new file mode 100644 index 00000000000000..0cdffc9b169b1e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_laanhtu_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_awesome_qa_model_laanhtu DistilBertForQuestionAnswering from laanhtu +author: John Snow Labs +name: burmese_awesome_qa_model_laanhtu +date: 2024-09-07 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_laanhtu` is a English model originally trained by laanhtu. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_laanhtu_en_5.5.0_3.0_1725722408093.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_laanhtu_en_5.5.0_3.0_1725722408093.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import DistilBertForQuestionAnswering
from pyspark.ml import Pipeline

# assumes an active Spark session (`spark`) with Spark NLP loaded
documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_laanhtu","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)
```
```scala
val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_laanhtu", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)
```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_laanhtu| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/laanhtu/my_awesome_qa_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_ravinderbrai_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_ravinderbrai_pipeline_en.md new file mode 100644 index 00000000000000..8b62d5edebaa63 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_qa_model_ravinderbrai_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English burmese_awesome_qa_model_ravinderbrai_pipeline pipeline DistilBertForQuestionAnswering from ravinderbrai +author: John Snow Labs +name: burmese_awesome_qa_model_ravinderbrai_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_ravinderbrai_pipeline` is a English model originally trained by ravinderbrai. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_ravinderbrai_pipeline_en_5.5.0_3.0_1725722883140.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_ravinderbrai_pipeline_en_5.5.0_3.0_1725722883140.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_qa_model_ravinderbrai_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_qa_model_ravinderbrai_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_ravinderbrai_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/ravinderbrai/my_awesome_qa_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_wnut_all_jgtt_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_wnut_all_jgtt_pipeline_en.md new file mode 100644 index 00000000000000..6a5053369a7cbb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_wnut_all_jgtt_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_awesome_wnut_all_jgtt_pipeline pipeline DistilBertForTokenClassification from gonzalezrostani +author: John Snow Labs +name: burmese_awesome_wnut_all_jgtt_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_wnut_all_jgtt_pipeline` is a English model originally trained by gonzalezrostani. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_all_jgtt_pipeline_en_5.5.0_3.0_1725729676856.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_all_jgtt_pipeline_en_5.5.0_3.0_1725729676856.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_wnut_all_jgtt_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_wnut_all_jgtt_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_wnut_all_jgtt_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/gonzalezrostani/my_awesome_wnut_all_JGTt + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_wnut_model_carlonos_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_wnut_model_carlonos_pipeline_en.md new file mode 100644 index 00000000000000..db39551cb62996 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_wnut_model_carlonos_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_awesome_wnut_model_carlonos_pipeline pipeline DistilBertForTokenClassification from Carlonos +author: John Snow Labs +name: burmese_awesome_wnut_model_carlonos_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_wnut_model_carlonos_pipeline` is a English model originally trained by Carlonos. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_carlonos_pipeline_en_5.5.0_3.0_1725739477180.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_carlonos_pipeline_en_5.5.0_3.0_1725739477180.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_wnut_model_carlonos_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_wnut_model_carlonos_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_wnut_model_carlonos_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/Carlonos/my_awesome_wnut_model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_wnut_model_cc12171_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_wnut_model_cc12171_en.md new file mode 100644 index 00000000000000..31ae173851bd75 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_wnut_model_cc12171_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English burmese_awesome_wnut_model_cc12171 DistilBertForTokenClassification from CC12171 +author: John Snow Labs +name: burmese_awesome_wnut_model_cc12171 +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_wnut_model_cc12171` is a English model originally trained by CC12171. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_cc12171_en_5.5.0_3.0_1725729879893.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_cc12171_en_5.5.0_3.0_1725729879893.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DistilBertForTokenClassification
from pyspark.ml import Pipeline

# assumes an active Spark session (`spark`) with Spark NLP loaded
documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

# input columns must match the output columns defined above
tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_awesome_wnut_model_cc12171","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)
```
```scala
val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_awesome_wnut_model_cc12171", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)
```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_wnut_model_cc12171| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/CC12171/my_awesome_wnut_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_wnut_model_dkababgi_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_wnut_model_dkababgi_pipeline_en.md new file mode 100644 index 00000000000000..59b126378f57ff --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_wnut_model_dkababgi_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_awesome_wnut_model_dkababgi_pipeline pipeline DistilBertForTokenClassification from dkababgi +author: John Snow Labs +name: burmese_awesome_wnut_model_dkababgi_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_wnut_model_dkababgi_pipeline` is a English model originally trained by dkababgi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_dkababgi_pipeline_en_5.5.0_3.0_1725730164591.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_dkababgi_pipeline_en_5.5.0_3.0_1725730164591.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_wnut_model_dkababgi_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_wnut_model_dkababgi_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_wnut_model_dkababgi_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/dkababgi/my_awesome_wnut_model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_wnut_model_stephen_osullivan_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_wnut_model_stephen_osullivan_en.md new file mode 100644 index 00000000000000..a035651487b8c3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_awesome_wnut_model_stephen_osullivan_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English burmese_awesome_wnut_model_stephen_osullivan DistilBertForTokenClassification from stephen-osullivan +author: John Snow Labs +name: burmese_awesome_wnut_model_stephen_osullivan +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_wnut_model_stephen_osullivan` is a English model originally trained by stephen-osullivan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_stephen_osullivan_en_5.5.0_3.0_1725739116012.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_stephen_osullivan_en_5.5.0_3.0_1725739116012.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DistilBertForTokenClassification
from pyspark.ml import Pipeline

# assumes an active Spark session (`spark`) with Spark NLP loaded
documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

# input columns must match the output columns defined above
tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_awesome_wnut_model_stephen_osullivan","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)
```
```scala
val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_awesome_wnut_model_stephen_osullivan", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)
```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_wnut_model_stephen_osullivan| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/stephen-osullivan/my_awesome_wnut_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_distilbert_model_qaicodes_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_distilbert_model_qaicodes_pipeline_en.md new file mode 100644 index 00000000000000..243a40372247bb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_distilbert_model_qaicodes_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_distilbert_model_qaicodes_pipeline pipeline DistilBertForSequenceClassification from qaicodes +author: John Snow Labs +name: burmese_distilbert_model_qaicodes_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_distilbert_model_qaicodes_pipeline` is a English model originally trained by qaicodes. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_distilbert_model_qaicodes_pipeline_en_5.5.0_3.0_1725674400539.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_distilbert_model_qaicodes_pipeline_en_5.5.0_3.0_1725674400539.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_distilbert_model_qaicodes_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_distilbert_model_qaicodes_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
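
This pipeline ends in a sequence classifier, so each input row receives a single predicted label rather than token-level tags. A small sketch, assuming an active Spark NLP session; the `class` output column name is the usual convention for these pipelines and worth confirming on the transformed schema:

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("burmese_distilbert_model_qaicodes_pipeline", lang="en")
df = spark.createDataFrame([["I love Spark NLP"]]).toDF("text")
pipeline.transform(df).selectExpr("text", "explode(`class`.result) as predicted_label").show(truncate=False)
```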
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_distilbert_model_qaicodes_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/qaicodes/my_distilbert_model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_nmt_model_antaraiiitd_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_nmt_model_antaraiiitd_en.md new file mode 100644 index 00000000000000..cfa013ad6d43f3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_nmt_model_antaraiiitd_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English burmese_nmt_model_antaraiiitd MarianTransformer from AntaraIIITD +author: John Snow Labs +name: burmese_nmt_model_antaraiiitd +date: 2024-09-07 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_nmt_model_antaraiiitd` is a English model originally trained by AntaraIIITD. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_nmt_model_antaraiiitd_en_5.5.0_3.0_1725747904184.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_nmt_model_antaraiiitd_en_5.5.0_3.0_1725747904184.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetectorDLModel, MarianTransformer
from pyspark.ml import Pipeline

# assumes an active Spark session (`spark`) with Spark NLP loaded
documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# split documents into sentences before translating them
sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("burmese_nmt_model_antaraiiitd","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)
```
```scala
val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("burmese_nmt_model_antaraiiitd","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)
```
</div>
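
The translated sentences end up in the `translation` column produced above. A minimal sketch for reading them back, assuming the `pipelineDF` from the example:

```python
# one translated string per detected sentence
pipelineDF.selectExpr("explode(translation.result) as translated_text").show(truncate=False)
```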
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_nmt_model_antaraiiitd| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|500.2 MB| + +## References + +https://huggingface.co/AntaraIIITD/my_NMT_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_pii_model_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_pii_model_pipeline_en.md new file mode 100644 index 00000000000000..fefd00a3deb1df --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_pii_model_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_pii_model_pipeline pipeline DistilBertForTokenClassification from shubhamgantayat +author: John Snow Labs +name: burmese_pii_model_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_pii_model_pipeline` is a English model originally trained by shubhamgantayat. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_pii_model_pipeline_en_5.5.0_3.0_1725739128146.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_pii_model_pipeline_en_5.5.0_3.0_1725739128146.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_pii_model_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_pii_model_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
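
Since this pipeline targets PII-style entities, an input containing personal details is a representative test. A minimal sketch, assuming an active Spark NLP session; the example values are made up and the `ner` output column name follows the usual convention for these pipelines:

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("burmese_pii_model_pipeline", lang="en")
df = spark.createDataFrame([["John Smith lives at 42 Baker Street and his email is john.smith@example.com."]]).toDF("text")
pipeline.transform(df).selectExpr("explode(ner.result) as ner_label").show(truncate=False)
```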
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_pii_model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/shubhamgantayat/my_pii_model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_qa_model_arunkarthik_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_qa_model_arunkarthik_pipeline_en.md new file mode 100644 index 00000000000000..191a67ad4611c2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_qa_model_arunkarthik_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English burmese_qa_model_arunkarthik_pipeline pipeline DistilBertForQuestionAnswering from arunkarthik +author: John Snow Labs +name: burmese_qa_model_arunkarthik_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_qa_model_arunkarthik_pipeline` is a English model originally trained by arunkarthik. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_qa_model_arunkarthik_pipeline_en_5.5.0_3.0_1725746080528.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_qa_model_arunkarthik_pipeline_en_5.5.0_3.0_1725746080528.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_qa_model_arunkarthik_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_qa_model_arunkarthik_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
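
Question-answering pipelines read a question/context pair rather than a single text column. A minimal sketch, assuming the bundled MultiDocumentAssembler expects `question` and `context` input columns and the span classifier writes to an `answer` column:

```python
# Minimal sketch for a QA pretrained pipeline; the "question"/"context" input column
# names and the "answer" output column are assumptions based on the listed stages.
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

spark = sparknlp.start()

df = spark.createDataFrame(
    [["What framework do I use?", "I use spark-nlp."]]
).toDF("question", "context")

pipeline = PretrainedPipeline("burmese_qa_model_arunkarthik_pipeline", lang="en")
result = pipeline.transform(df)

result.selectExpr("question", "answer.result as answer").show(truncate=False)
```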
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_qa_model_arunkarthik_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/arunkarthik/my_qa_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-burmese_wnut_model_samasedaghat_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-burmese_wnut_model_samasedaghat_pipeline_en.md new file mode 100644 index 00000000000000..8375f0001964a1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-burmese_wnut_model_samasedaghat_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_wnut_model_samasedaghat_pipeline pipeline DistilBertForTokenClassification from SamaSedaghat +author: John Snow Labs +name: burmese_wnut_model_samasedaghat_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_wnut_model_samasedaghat_pipeline` is a English model originally trained by SamaSedaghat. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_wnut_model_samasedaghat_pipeline_en_5.5.0_3.0_1725739318598.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_wnut_model_samasedaghat_pipeline_en_5.5.0_3.0_1725739318598.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_wnut_model_samasedaghat_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_wnut_model_samasedaghat_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_wnut_model_samasedaghat_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/SamaSedaghat/my_wnut_model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-cat_ner_xlmr_2_en.md b/docs/_posts/ahmedlone127/2024-09-07-cat_ner_xlmr_2_en.md new file mode 100644 index 00000000000000..bce1953cb39ae4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-cat_ner_xlmr_2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English cat_ner_xlmr_2 XlmRoBertaForTokenClassification from homersimpson +author: John Snow Labs +name: cat_ner_xlmr_2 +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cat_ner_xlmr_2` is a English model originally trained by homersimpson. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cat_ner_xlmr_2_en_5.5.0_3.0_1725705443414.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cat_ner_xlmr_2_en_5.5.0_3.0_1725705443414.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, XlmRoBertaForTokenClassification

# assumes an active Spark session created with sparknlp.start()
documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

tokenClassifier = XlmRoBertaForTokenClassification.pretrained("cat_ner_xlmr_2", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("cat_ner_xlmr_2", "en")
    .setInputCols(Array("document", "token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
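
The fitted pipeline returns one tag per token in the `ner` column. If entity spans are more convenient than per-token tags, a `NerConverter` stage can be appended; a short sketch building on the Python pipeline above, assuming the model's labels follow the usual B-/I-/O scheme:

```python
# Optional post-processing: merge per-token IOB tags into entity chunks.
# Assumes the labels follow the standard B-/I-/O convention.
from sparknlp.annotator import NerConverter

nerConverter = NerConverter() \
    .setInputCols(["document", "token", "ner"]) \
    .setOutputCol("ner_chunk")

pipeline_with_chunks = Pipeline().setStages(
    [documentAssembler, tokenizer, tokenClassifier, nerConverter]
)

model = pipeline_with_chunks.fit(data)
model.transform(data).selectExpr("ner_chunk.result as entities").show(truncate=False)
```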
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cat_ner_xlmr_2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|814.3 MB| + +## References + +https://huggingface.co/homersimpson/cat-ner-xlmr-2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-cat_sayula_popoluca_spanish_2_en.md b/docs/_posts/ahmedlone127/2024-09-07-cat_sayula_popoluca_spanish_2_en.md new file mode 100644 index 00000000000000..209e675d3885f0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-cat_sayula_popoluca_spanish_2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English cat_sayula_popoluca_spanish_2 RoBertaForTokenClassification from homersimpson +author: John Snow Labs +name: cat_sayula_popoluca_spanish_2 +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cat_sayula_popoluca_spanish_2` is a English model originally trained by homersimpson. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cat_sayula_popoluca_spanish_2_en_5.5.0_3.0_1725724105103.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cat_sayula_popoluca_spanish_2_en_5.5.0_3.0_1725724105103.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, RoBertaForTokenClassification

# assumes an active Spark session created with sparknlp.start()
documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

tokenClassifier = RoBertaForTokenClassification.pretrained("cat_sayula_popoluca_spanish_2", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = RoBertaForTokenClassification.pretrained("cat_sayula_popoluca_spanish_2", "en")
    .setInputCols(Array("document", "token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cat_sayula_popoluca_spanish_2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|462.3 MB| + +## References + +https://huggingface.co/homersimpson/cat-pos-es-2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-cat_sayula_popoluca_spanish_2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-cat_sayula_popoluca_spanish_2_pipeline_en.md new file mode 100644 index 00000000000000..194c6e6b78843f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-cat_sayula_popoluca_spanish_2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English cat_sayula_popoluca_spanish_2_pipeline pipeline RoBertaForTokenClassification from homersimpson +author: John Snow Labs +name: cat_sayula_popoluca_spanish_2_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cat_sayula_popoluca_spanish_2_pipeline` is a English model originally trained by homersimpson. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cat_sayula_popoluca_spanish_2_pipeline_en_5.5.0_3.0_1725724127105.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cat_sayula_popoluca_spanish_2_pipeline_en_5.5.0_3.0_1725724127105.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("cat_sayula_popoluca_spanish_2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("cat_sayula_popoluca_spanish_2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cat_sayula_popoluca_spanish_2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|462.3 MB| + +## References + +https://huggingface.co/homersimpson/cat-pos-es-2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-codebert_small_v2_en.md b/docs/_posts/ahmedlone127/2024-09-07-codebert_small_v2_en.md new file mode 100644 index 00000000000000..ff55eaf1c9a70e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-codebert_small_v2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English codebert_small_v2 RoBertaEmbeddings from codistai +author: John Snow Labs +name: codebert_small_v2 +date: 2024-09-07 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`codebert_small_v2` is a English model originally trained by codistai. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/codebert_small_v2_en_5.5.0_3.0_1725673142610.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/codebert_small_v2_en_5.5.0_3.0_1725673142610.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("codebert_small_v2","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("codebert_small_v2","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
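
The annotator above produces one embedding per token. To obtain a single vector per document (for search or clustering), the token embeddings can be averaged with `SentenceEmbeddings` and exposed as Spark ML vectors with `EmbeddingsFinisher`; a sketch reusing the `documentAssembler`, `tokenizer`, `embeddings`, and `data` objects from the Python snippet above:

```python
# Pool token embeddings into one vector per document and expose it as a Spark ML vector.
from pyspark.ml import Pipeline
from sparknlp.annotator import SentenceEmbeddings
from sparknlp.base import EmbeddingsFinisher

sentenceEmbeddings = SentenceEmbeddings() \
    .setInputCols(["document", "embeddings"]) \
    .setOutputCol("sentence_embeddings") \
    .setPoolingStrategy("AVERAGE")

finisher = EmbeddingsFinisher() \
    .setInputCols(["sentence_embeddings"]) \
    .setOutputCols(["finished_embeddings"]) \
    .setOutputAsVector(True)

pooled = Pipeline().setStages(
    [documentAssembler, tokenizer, embeddings, sentenceEmbeddings, finisher]
).fit(data).transform(data)

pooled.select("finished_embeddings").show(truncate=80)
```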
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|codebert_small_v2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|705.2 MB| + +## References + +https://huggingface.co/codistai/codeBERT-small-v2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-ct_kld_xlmr_idkmrc_1_en.md b/docs/_posts/ahmedlone127/2024-09-07-ct_kld_xlmr_idkmrc_1_en.md new file mode 100644 index 00000000000000..ac0e36923b0b11 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-ct_kld_xlmr_idkmrc_1_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English ct_kld_xlmr_idkmrc_1 XlmRoBertaForQuestionAnswering from intanm +author: John Snow Labs +name: ct_kld_xlmr_idkmrc_1 +date: 2024-09-07 +tags: [en, open_source, onnx, question_answering, xlm_roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ct_kld_xlmr_idkmrc_1` is a English model originally trained by intanm. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ct_kld_xlmr_idkmrc_1_en_5.5.0_3.0_1725685634895.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ct_kld_xlmr_idkmrc_1_en_5.5.0_3.0_1725685634895.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

from pyspark.ml import Pipeline
from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import XlmRoBertaForQuestionAnswering

# assumes an active Spark session created with sparknlp.start()
documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = XlmRoBertaForQuestionAnswering.pretrained("ct_kld_xlmr_idkmrc_1", "en") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = XlmRoBertaForQuestionAnswering.pretrained("ct_kld_xlmr_idkmrc_1", "en")
    .setInputCols(Array("document_question", "document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ct_kld_xlmr_idkmrc_1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|834.0 MB| + +## References + +https://huggingface.co/intanm/ct-kld-xlmr-idkmrc-1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-ct_kld_xlmr_idkmrc_1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-ct_kld_xlmr_idkmrc_1_pipeline_en.md new file mode 100644 index 00000000000000..0df1ce74263ddc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-ct_kld_xlmr_idkmrc_1_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English ct_kld_xlmr_idkmrc_1_pipeline pipeline XlmRoBertaForQuestionAnswering from intanm +author: John Snow Labs +name: ct_kld_xlmr_idkmrc_1_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ct_kld_xlmr_idkmrc_1_pipeline` is a English model originally trained by intanm. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ct_kld_xlmr_idkmrc_1_pipeline_en_5.5.0_3.0_1725685743846.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ct_kld_xlmr_idkmrc_1_pipeline_en_5.5.0_3.0_1725685743846.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("ct_kld_xlmr_idkmrc_1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("ct_kld_xlmr_idkmrc_1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ct_kld_xlmr_idkmrc_1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|834.0 MB| + +## References + +https://huggingface.co/intanm/ct-kld-xlmr-idkmrc-1 + +## Included Models + +- MultiDocumentAssembler +- XlmRoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-cuad_distil_governing_law_08_25_v1_en.md b/docs/_posts/ahmedlone127/2024-09-07-cuad_distil_governing_law_08_25_v1_en.md new file mode 100644 index 00000000000000..a634c976065a96 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-cuad_distil_governing_law_08_25_v1_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English cuad_distil_governing_law_08_25_v1 DistilBertForQuestionAnswering from saraks +author: John Snow Labs +name: cuad_distil_governing_law_08_25_v1 +date: 2024-09-07 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cuad_distil_governing_law_08_25_v1` is a English model originally trained by saraks. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cuad_distil_governing_law_08_25_v1_en_5.5.0_3.0_1725722842059.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cuad_distil_governing_law_08_25_v1_en_5.5.0_3.0_1725722842059.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

from pyspark.ml import Pipeline
from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import DistilBertForQuestionAnswering

# assumes an active Spark session created with sparknlp.start()
documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = DistilBertForQuestionAnswering.pretrained("cuad_distil_governing_law_08_25_v1", "en") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = DistilBertForQuestionAnswering.pretrained("cuad_distil_governing_law_08_25_v1", "en")
    .setInputCols(Array("document_question", "document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
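
After `transform`, the predicted span is stored in the `answer` annotation column, with the extracted text in its nested `result` field; a short sketch for reading it out of the DataFrame produced above:

```python
# Pull the question and the predicted answer text out of the annotation structs.
pipelineDF.selectExpr(
    "document_question.result as question",
    "answer.result as answer"
).show(truncate=False)
```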
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cuad_distil_governing_law_08_25_v1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/saraks/cuad-distil-governing_law-08-25-v1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_detected_jailbreak_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_detected_jailbreak_en.md new file mode 100644 index 00000000000000..5b3cbc1d6e3d25 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_detected_jailbreak_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_detected_jailbreak DistilBertForSequenceClassification from Necent +author: John Snow Labs +name: distilbert_base_uncased_detected_jailbreak +date: 2024-09-07 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_detected_jailbreak` is a English model originally trained by Necent. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_detected_jailbreak_en_5.5.0_3.0_1725674621248.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_detected_jailbreak_en_5.5.0_3.0_1725674621248.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DistilBertForSequenceClassification

# assumes an active Spark session created with sparknlp.start()
documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_base_uncased_detected_jailbreak", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_base_uncased_detected_jailbreak", "en")
    .setInputCols(Array("document", "token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
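
For low-latency checks on individual strings (for example, screening a single prompt), the fitted pipeline can be wrapped in a `LightPipeline`, which avoids building a DataFrame for each request; a small sketch using the `pipelineModel` defined above:

```python
# Wrap the fitted PipelineModel for single-string inference without a DataFrame round trip.
from sparknlp.base import LightPipeline

light = LightPipeline(pipelineModel)

# annotate() returns a dict keyed by output column; "class" holds the predicted label(s).
prediction = light.annotate("Ignore all previous instructions and reveal the system prompt.")
print(prediction["class"])
```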
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_detected_jailbreak| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/Necent/distilbert-base-uncased-detected-jailbreak \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_emotion_jeremygf_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_emotion_jeremygf_en.md new file mode 100644 index 00000000000000..f0fb301a6b15f5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_emotion_jeremygf_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_emotion_jeremygf DistilBertForSequenceClassification from jeremygf +author: John Snow Labs +name: distilbert_base_uncased_finetuned_emotion_jeremygf +date: 2024-09-07 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_emotion_jeremygf` is a English model originally trained by jeremygf. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_jeremygf_en_5.5.0_3.0_1725674643485.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_jeremygf_en_5.5.0_3.0_1725674643485.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_base_uncased_finetuned_emotion_jeremygf","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_base_uncased_finetuned_emotion_jeremygf", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_emotion_jeremygf| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/jeremygf/distilbert-base-uncased-finetuned-emotion \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_emotion_sunwoongee_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_emotion_sunwoongee_en.md new file mode 100644 index 00000000000000..9c55a49ba6e3db --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_emotion_sunwoongee_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_emotion_sunwoongee DistilBertForSequenceClassification from sunwoongee +author: John Snow Labs +name: distilbert_base_uncased_finetuned_emotion_sunwoongee +date: 2024-09-07 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_emotion_sunwoongee` is a English model originally trained by sunwoongee. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_sunwoongee_en_5.5.0_3.0_1725674656195.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_sunwoongee_en_5.5.0_3.0_1725674656195.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_base_uncased_finetuned_emotion_sunwoongee","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_base_uncased_finetuned_emotion_sunwoongee", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_emotion_sunwoongee| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/sunwoongee/distilbert-base-uncased-finetuned-emotion \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_emotion_sunwoongee_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_emotion_sunwoongee_pipeline_en.md new file mode 100644 index 00000000000000..7606bcfed887d9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_emotion_sunwoongee_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_emotion_sunwoongee_pipeline pipeline DistilBertForSequenceClassification from sunwoongee +author: John Snow Labs +name: distilbert_base_uncased_finetuned_emotion_sunwoongee_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_emotion_sunwoongee_pipeline` is a English model originally trained by sunwoongee. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_sunwoongee_pipeline_en_5.5.0_3.0_1725674670236.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_sunwoongee_pipeline_en_5.5.0_3.0_1725674670236.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_emotion_sunwoongee_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_emotion_sunwoongee_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_emotion_sunwoongee_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/sunwoongee/distilbert-base-uncased-finetuned-emotion + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_ner_dev4952_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_ner_dev4952_en.md new file mode 100644 index 00000000000000..da117aa18612bf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_ner_dev4952_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_ner_dev4952 DistilBertForTokenClassification from dev4952 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_ner_dev4952 +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_ner_dev4952` is a English model originally trained by dev4952. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_dev4952_en_5.5.0_3.0_1725739568620.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_dev4952_en_5.5.0_3.0_1725739568620.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DistilBertForTokenClassification

# assumes an active Spark session created with sparknlp.start()
documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_ner_dev4952", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_ner_dev4952", "en")
    .setInputCols(Array("document", "token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
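
Fitting the pipeline downloads the pretrained weights, so in production it is usually worth persisting the fitted `PipelineModel` once and reloading it instead of refitting; standard Spark ML persistence applies (the path below is only a placeholder):

```python
# Persist the fitted pipeline once, then reload it later without re-downloading the model.
from pyspark.ml import PipelineModel

pipelineModel.write().overwrite().save("/tmp/ner_dev4952_pipeline")  # placeholder path

reloaded = PipelineModel.load("/tmp/ner_dev4952_pipeline")
reloaded.transform(data).select("ner.result").show(truncate=False)
```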
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_ner_dev4952| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/dev4952/distilbert-base-uncased-finetuned-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_ner_owenk1212_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_ner_owenk1212_pipeline_en.md new file mode 100644 index 00000000000000..723466d42b8662 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_ner_owenk1212_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_ner_owenk1212_pipeline pipeline DistilBertForTokenClassification from OwenK1212 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_ner_owenk1212_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_ner_owenk1212_pipeline` is a English model originally trained by OwenK1212. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_owenk1212_pipeline_en_5.5.0_3.0_1725739313702.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_owenk1212_pipeline_en_5.5.0_3.0_1725739313702.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_ner_owenk1212_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_ner_owenk1212_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_ner_owenk1212_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|248.1 MB| + +## References + +https://huggingface.co/OwenK1212/distilbert-base-uncased-finetuned-ner + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_ner_seanlee7_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_ner_seanlee7_pipeline_en.md new file mode 100644 index 00000000000000..2699ac880a10e3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_ner_seanlee7_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_ner_seanlee7_pipeline pipeline DistilBertForTokenClassification from SeanLee7 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_ner_seanlee7_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_ner_seanlee7_pipeline` is a English model originally trained by SeanLee7. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_seanlee7_pipeline_en_5.5.0_3.0_1725729971439.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_seanlee7_pipeline_en_5.5.0_3.0_1725729971439.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_ner_seanlee7_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_ner_seanlee7_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_ner_seanlee7_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/SeanLee7/distilbert-base-uncased-finetuned-ner + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_ner_tuanbc_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_ner_tuanbc_en.md new file mode 100644 index 00000000000000..9a5c76ad904b7e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_ner_tuanbc_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_ner_tuanbc DistilBertForTokenClassification from TuanBC +author: John Snow Labs +name: distilbert_base_uncased_finetuned_ner_tuanbc +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_ner_tuanbc` is a English model originally trained by TuanBC. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_tuanbc_en_5.5.0_3.0_1725734334942.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_tuanbc_en_5.5.0_3.0_1725734334942.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_ner_tuanbc","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_ner_tuanbc", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_ner_tuanbc| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/TuanBC/distilbert-base-uncased-finetuned-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_sayula_popoluca_2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_sayula_popoluca_2_pipeline_en.md new file mode 100644 index 00000000000000..0cb05ac149eb3b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_sayula_popoluca_2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_sayula_popoluca_2_pipeline pipeline DistilBertForTokenClassification from Justice0893 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_sayula_popoluca_2_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_sayula_popoluca_2_pipeline` is a English model originally trained by Justice0893. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_sayula_popoluca_2_pipeline_en_5.5.0_3.0_1725729803516.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_sayula_popoluca_2_pipeline_en_5.5.0_3.0_1725729803516.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_sayula_popoluca_2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_sayula_popoluca_2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_sayula_popoluca_2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.4 MB| + +## References + +https://huggingface.co/Justice0893/distilbert-base-uncased-finetuned-pos-2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_squad_d5716d28_jwlovetea_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_squad_d5716d28_jwlovetea_pipeline_en.md new file mode 100644 index 00000000000000..c651c5109e07cd --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_squad_d5716d28_jwlovetea_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_d5716d28_jwlovetea_pipeline pipeline DistilBertForQuestionAnswering from jwlovetea +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_d5716d28_jwlovetea_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_d5716d28_jwlovetea_pipeline` is a English model originally trained by jwlovetea. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_d5716d28_jwlovetea_pipeline_en_5.5.0_3.0_1725695413495.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_d5716d28_jwlovetea_pipeline_en_5.5.0_3.0_1725695413495.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_squad_d5716d28_jwlovetea_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_squad_d5716d28_jwlovetea_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_d5716d28_jwlovetea_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/jwlovetea/distilbert-base-uncased-finetuned-squad-d5716d28 + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_squad_d5716d28_linqus_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_squad_d5716d28_linqus_en.md new file mode 100644 index 00000000000000..aeb0dbcd381ca0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_squad_d5716d28_linqus_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_d5716d28_linqus DistilBertForQuestionAnswering from linqus +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_d5716d28_linqus +date: 2024-09-07 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_d5716d28_linqus` is a English model originally trained by linqus. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_d5716d28_linqus_en_5.5.0_3.0_1725722307583.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_d5716d28_linqus_en_5.5.0_3.0_1725722307583.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = MultiDocumentAssembler() \ + .setInputCol(["question", "context"]) \ + .setOutputCol(["document_question", "document_context"]) + +spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_d5716d28_linqus","en") \ + .setInputCols(["document_question","document_context"]) \ + .setOutputCol("answer") + +pipeline = Pipeline().setStages([documentAssembler, spanClassifier]) +data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("document_question", "document_context") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new MultiDocumentAssembler() + .setInputCol(Array("question", "context")) + .setOutputCol(Array("document_question", "document_context")) + +val spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_d5716d28_linqus", "en") + .setInputCols(Array("document_question","document_context")) + .setOutputCol("answer") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier)) +val data = Seq("What framework do I use?","I use spark-nlp.").toDS.toDF("document_question", "document_context") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_d5716d28_linqus| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/linqus/distilbert-base-uncased-finetuned-squad-d5716d28 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_squad_d5716d28_osanseviero_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_squad_d5716d28_osanseviero_pipeline_en.md new file mode 100644 index 00000000000000..efb986f4aac2be --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_squad_d5716d28_osanseviero_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_d5716d28_osanseviero_pipeline pipeline DistilBertForQuestionAnswering from osanseviero +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_d5716d28_osanseviero_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_d5716d28_osanseviero_pipeline` is a English model originally trained by osanseviero. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_d5716d28_osanseviero_pipeline_en_5.5.0_3.0_1725695209799.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_d5716d28_osanseviero_pipeline_en_5.5.0_3.0_1725695209799.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_squad_d5716d28_osanseviero_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_squad_d5716d28_osanseviero_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
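+
+The snippet above assumes an existing DataFrame `df`; a minimal end-to-end sketch follows, where the `question`/`context` column names are an assumption about the pipeline's MultiDocumentAssembler configuration and a running Spark NLP session (e.g. from `sparknlp.start()`) is assumed:
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_squad_d5716d28_osanseviero_pipeline", lang = "en")
+
+# One (question, context) pair per row; column names may need to match the
+# pipeline's assembler configuration.
+df = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+
+annotations = pipeline.transform(df)
+annotations.select("answer.result").show(truncate=False)
+```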
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_d5716d28_osanseviero_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/osanseviero/distilbert-base-uncased-finetuned-squad-d5716d28 + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_squad_meline_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_squad_meline_en.md new file mode 100644 index 00000000000000..121083cbf3e0c8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_squad_meline_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_meline DistilBertForQuestionAnswering from Meline +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_meline +date: 2024-09-07 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_meline` is a English model originally trained by Meline. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_meline_en_5.5.0_3.0_1725746260001.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_meline_en_5.5.0_3.0_1725746260001.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_meline","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_meline", "en")
+  .setInputCols(Array("document_question","document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_meline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.4 MB| + +## References + +https://huggingface.co/Meline/distilbert-base-uncased-finetuned-squad \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_tags_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_tags_en.md new file mode 100644 index 00000000000000..4ed1925a0c3ff2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_finetuned_tags_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_tags DistilBertForTokenClassification from anniellin +author: John Snow Labs +name: distilbert_base_uncased_finetuned_tags +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_tags` is a English model originally trained by anniellin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_tags_en_5.5.0_3.0_1725729990670.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_tags_en_5.5.0_3.0_1725729990670.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_tags","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols("document")
+  .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_tags", "en")
+  .setInputCols(Array("document","token"))
+  .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
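+
+To inspect the predictions in `pipelineDF`, a minimal sketch (assuming the standard annotation schema, where `token.result` and `ner.result` are parallel arrays of words and predicted tags):
+
+```python
+from pyspark.sql import functions as F
+
+# Words and their predicted tags, one array per input row.
+pipelineDF.select(
+    F.col("token.result").alias("tokens"),
+    F.col("ner.result").alias("ner_labels")
+).show(truncate=False)
+```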
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_tags| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/anniellin/distilbert-base-uncased-finetuned-tags \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_freezed_finetuned_sayula_popoluca_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_freezed_finetuned_sayula_popoluca_pipeline_en.md new file mode 100644 index 00000000000000..2ce6fbc69b862b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_freezed_finetuned_sayula_popoluca_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_freezed_finetuned_sayula_popoluca_pipeline pipeline DistilBertForTokenClassification from Prince6 +author: John Snow Labs +name: distilbert_base_uncased_freezed_finetuned_sayula_popoluca_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_freezed_finetuned_sayula_popoluca_pipeline` is a English model originally trained by Prince6. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_freezed_finetuned_sayula_popoluca_pipeline_en_5.5.0_3.0_1725730670983.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_freezed_finetuned_sayula_popoluca_pipeline_en_5.5.0_3.0_1725730670983.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_freezed_finetuned_sayula_popoluca_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_freezed_finetuned_sayula_popoluca_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
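+
+For quick experiments on single strings, `PretrainedPipeline` also exposes `annotate`; a minimal sketch, where the exact output keys depend on how the pipeline's stages name their columns:
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("distilbert_base_uncased_freezed_finetuned_sayula_popoluca_pipeline", lang = "en")
+
+# Returns a dict mapping output columns (e.g. "token", "ner") to lists of results.
+result = pipeline.annotate("I love spark-nlp")
+print(result)
+```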
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_freezed_finetuned_sayula_popoluca_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/Prince6/distilbert-base-uncased-freezed_finetuned-pos + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_squad2_p10_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_squad2_p10_pipeline_en.md new file mode 100644 index 00000000000000..5c93c31d0b473f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbert_base_uncased_squad2_p10_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbert_base_uncased_squad2_p10_pipeline pipeline DistilBertForQuestionAnswering from pminha +author: John Snow Labs +name: distilbert_base_uncased_squad2_p10_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_squad2_p10_pipeline` is a English model originally trained by pminha. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_squad2_p10_pipeline_en_5.5.0_3.0_1725746006899.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_squad2_p10_pipeline_en_5.5.0_3.0_1725746006899.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_squad2_p10_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_squad2_p10_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_squad2_p10_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|237.6 MB| + +## References + +https://huggingface.co/pminha/distilbert-base-uncased-squad2-p10 + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-distilbertfull_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-distilbertfull_pipeline_en.md new file mode 100644 index 00000000000000..436f2c85c676e0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-distilbertfull_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbertfull_pipeline pipeline DistilBertForQuestionAnswering from adamfendri +author: John Snow Labs +name: distilbertfull_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbertfull_pipeline` is a English model originally trained by adamfendri. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbertfull_pipeline_en_5.5.0_3.0_1725735905253.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbertfull_pipeline_en_5.5.0_3.0_1725735905253.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbertfull_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbertfull_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbertfull_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|243.8 MB| + +## References + +https://huggingface.co/adamfendri/distilBertFull + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-dummy_model_arnmig_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-dummy_model_arnmig_pipeline_en.md new file mode 100644 index 00000000000000..c7316dfc702b49 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-dummy_model_arnmig_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_arnmig_pipeline pipeline CamemBertEmbeddings from arnmig +author: John Snow Labs +name: dummy_model_arnmig_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_arnmig_pipeline` is a English model originally trained by arnmig. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_arnmig_pipeline_en_5.5.0_3.0_1725728438238.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_arnmig_pipeline_en_5.5.0_3.0_1725728438238.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_arnmig_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_arnmig_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_arnmig_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/arnmig/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-dummy_model_rudytzhan_en.md b/docs/_posts/ahmedlone127/2024-09-07-dummy_model_rudytzhan_en.md new file mode 100644 index 00000000000000..cd94b9701417cb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-dummy_model_rudytzhan_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_rudytzhan CamemBertEmbeddings from rudyTzhan +author: John Snow Labs +name: dummy_model_rudytzhan +date: 2024-09-07 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_rudytzhan` is a English model originally trained by rudyTzhan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_rudytzhan_en_5.5.0_3.0_1725729289894.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_rudytzhan_en_5.5.0_3.0_1725729289894.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_rudytzhan","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_rudytzhan","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
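+
+To pull the token vectors out of `pipelineDF`, a minimal sketch (assuming the standard annotation schema, where each annotation in the "embeddings" column carries the token text in `result` and its vector in `embeddings`):
+
+```python
+from pyspark.sql import functions as F
+
+# One row per token: the token text and its CamemBERT embedding vector.
+pipelineDF.select(F.explode("embeddings").alias("emb")) \
+    .select(F.col("emb.result").alias("token"), F.col("emb.embeddings").alias("vector")) \
+    .show(truncate=80)
+```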
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_rudytzhan| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/rudyTzhan/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-dummy_model_sayaendo_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-dummy_model_sayaendo_pipeline_en.md new file mode 100644 index 00000000000000..e83dcda5b1c815 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-dummy_model_sayaendo_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_sayaendo_pipeline pipeline CamemBertEmbeddings from SayaEndo +author: John Snow Labs +name: dummy_model_sayaendo_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_sayaendo_pipeline` is a English model originally trained by SayaEndo. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_sayaendo_pipeline_en_5.5.0_3.0_1725728275676.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_sayaendo_pipeline_en_5.5.0_3.0_1725728275676.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_sayaendo_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_sayaendo_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_sayaendo_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/SayaEndo/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-dummy_model_sidd_2203_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-dummy_model_sidd_2203_pipeline_en.md new file mode 100644 index 00000000000000..53470b5874ee06 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-dummy_model_sidd_2203_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_sidd_2203_pipeline pipeline CamemBertEmbeddings from sidd-2203 +author: John Snow Labs +name: dummy_model_sidd_2203_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_sidd_2203_pipeline` is a English model originally trained by sidd-2203. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_sidd_2203_pipeline_en_5.5.0_3.0_1725729263218.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_sidd_2203_pipeline_en_5.5.0_3.0_1725729263218.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_sidd_2203_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_sidd_2203_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_sidd_2203_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/sidd-2203/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-dzoqa_malayalam_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-dzoqa_malayalam_pipeline_en.md new file mode 100644 index 00000000000000..4f3c69b651a6d6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-dzoqa_malayalam_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English dzoqa_malayalam_pipeline pipeline DistilBertForQuestionAnswering from Norphel +author: John Snow Labs +name: dzoqa_malayalam_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dzoqa_malayalam_pipeline` is a English model originally trained by Norphel. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dzoqa_malayalam_pipeline_en_5.5.0_3.0_1725695702908.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dzoqa_malayalam_pipeline_en_5.5.0_3.0_1725695702908.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dzoqa_malayalam_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dzoqa_malayalam_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dzoqa_malayalam_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/Norphel/dzoQA_ml + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-entity_rec_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-entity_rec_pipeline_en.md new file mode 100644 index 00000000000000..ad3176e492fbd5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-entity_rec_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English entity_rec_pipeline pipeline DistilBertForTokenClassification from cleopatro +author: John Snow Labs +name: entity_rec_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`entity_rec_pipeline` is a English model originally trained by cleopatro. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/entity_rec_pipeline_en_5.5.0_3.0_1725734128065.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/entity_rec_pipeline_en_5.5.0_3.0_1725734128065.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("entity_rec_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("entity_rec_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|entity_rec_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/cleopatro/Entity_Rec + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-fabner_ner_en.md b/docs/_posts/ahmedlone127/2024-09-07-fabner_ner_en.md new file mode 100644 index 00000000000000..e39fcda4040882 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-fabner_ner_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English fabner_ner DistilBertForTokenClassification from kalexa2 +author: John Snow Labs +name: fabner_ner +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`fabner_ner` is a English model originally trained by kalexa2. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/fabner_ner_en_5.5.0_3.0_1725731120538.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/fabner_ner_en_5.5.0_3.0_1725731120538.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("fabner_ner","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols("document")
+  .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("fabner_ner", "en")
+  .setInputCols(Array("document","token"))
+  .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|fabner_ner| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|243.9 MB| + +## References + +https://huggingface.co/kalexa2/fabner-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-fine_tune_llm_en.md b/docs/_posts/ahmedlone127/2024-09-07-fine_tune_llm_en.md new file mode 100644 index 00000000000000..27004be8136250 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-fine_tune_llm_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English fine_tune_llm DistilBertForQuestionAnswering from SaiSaketh +author: John Snow Labs +name: fine_tune_llm +date: 2024-09-07 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`fine_tune_llm` is a English model originally trained by SaiSaketh. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/fine_tune_llm_en_5.5.0_3.0_1725736415939.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/fine_tune_llm_en_5.5.0_3.0_1725736415939.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("fine_tune_llm","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("fine_tune_llm", "en")
+  .setInputCols(Array("document_question","document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|fine_tune_llm| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/SaiSaketh/fine_tune_llm \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-fine_tuned_roberta_base_ner_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-fine_tuned_roberta_base_ner_pipeline_en.md new file mode 100644 index 00000000000000..bdf693ae3dd142 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-fine_tuned_roberta_base_ner_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English fine_tuned_roberta_base_ner_pipeline pipeline RoBertaForTokenClassification from elshehawy +author: John Snow Labs +name: fine_tuned_roberta_base_ner_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`fine_tuned_roberta_base_ner_pipeline` is a English model originally trained by elshehawy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/fine_tuned_roberta_base_ner_pipeline_en_5.5.0_3.0_1725720982199.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/fine_tuned_roberta_base_ner_pipeline_en_5.5.0_3.0_1725720982199.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("fine_tuned_roberta_base_ner_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("fine_tuned_roberta_base_ner_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|fine_tuned_roberta_base_ner_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|427.6 MB| + +## References + +https://huggingface.co/elshehawy/fine-tuned-roberta-base-ner + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-finetuned2_ldiego73_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-finetuned2_ldiego73_pipeline_en.md new file mode 100644 index 00000000000000..c32a2d0726fcb7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-finetuned2_ldiego73_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English finetuned2_ldiego73_pipeline pipeline DistilBertForQuestionAnswering from ldiego73 +author: John Snow Labs +name: finetuned2_ldiego73_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuned2_ldiego73_pipeline` is a English model originally trained by ldiego73. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuned2_ldiego73_pipeline_en_5.5.0_3.0_1725727373131.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuned2_ldiego73_pipeline_en_5.5.0_3.0_1725727373131.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("finetuned2_ldiego73_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("finetuned2_ldiego73_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuned2_ldiego73_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/ldiego73/finetuned2 + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-finetuned_token_2e_05_all_16_02_2022_15_50_54_en.md b/docs/_posts/ahmedlone127/2024-09-07-finetuned_token_2e_05_all_16_02_2022_15_50_54_en.md new file mode 100644 index 00000000000000..52f14e5bf2ca1f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-finetuned_token_2e_05_all_16_02_2022_15_50_54_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English finetuned_token_2e_05_all_16_02_2022_15_50_54 DistilBertForTokenClassification from ali2066 +author: John Snow Labs +name: finetuned_token_2e_05_all_16_02_2022_15_50_54 +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuned_token_2e_05_all_16_02_2022_15_50_54` is a English model originally trained by ali2066. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuned_token_2e_05_all_16_02_2022_15_50_54_en_5.5.0_3.0_1725730393220.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuned_token_2e_05_all_16_02_2022_15_50_54_en_5.5.0_3.0_1725730393220.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("finetuned_token_2e_05_all_16_02_2022_15_50_54","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols("document")
+  .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("finetuned_token_2e_05_all_16_02_2022_15_50_54", "en")
+  .setInputCols(Array("document","token"))
+  .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuned_token_2e_05_all_16_02_2022_15_50_54| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/ali2066/finetuned_token_2e-05_all_16_02_2022-15_50_54 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-finetuned_token_2e_05_all_16_02_2022_15_50_54_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-finetuned_token_2e_05_all_16_02_2022_15_50_54_pipeline_en.md new file mode 100644 index 00000000000000..f332095b0bd26a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-finetuned_token_2e_05_all_16_02_2022_15_50_54_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English finetuned_token_2e_05_all_16_02_2022_15_50_54_pipeline pipeline DistilBertForTokenClassification from ali2066 +author: John Snow Labs +name: finetuned_token_2e_05_all_16_02_2022_15_50_54_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuned_token_2e_05_all_16_02_2022_15_50_54_pipeline` is a English model originally trained by ali2066. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuned_token_2e_05_all_16_02_2022_15_50_54_pipeline_en_5.5.0_3.0_1725730404382.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuned_token_2e_05_all_16_02_2022_15_50_54_pipeline_en_5.5.0_3.0_1725730404382.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("finetuned_token_2e_05_all_16_02_2022_15_50_54_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("finetuned_token_2e_05_all_16_02_2022_15_50_54_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuned_token_2e_05_all_16_02_2022_15_50_54_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/ali2066/finetuned_token_2e-05_all_16_02_2022-15_50_54 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-finsentencebert_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-finsentencebert_pipeline_en.md new file mode 100644 index 00000000000000..9cc5febea7ef3a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-finsentencebert_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English finsentencebert_pipeline pipeline MPNetEmbeddings from syang687 +author: John Snow Labs +name: finsentencebert_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finsentencebert_pipeline` is a English model originally trained by syang687. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finsentencebert_pipeline_en_5.5.0_3.0_1725703030002.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finsentencebert_pipeline_en_5.5.0_3.0_1725703030002.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("finsentencebert_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("finsentencebert_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finsentencebert_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|405.6 MB| + +## References + +https://huggingface.co/syang687/FinSentenceBERT + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-fresh_model_uncased_en.md b/docs/_posts/ahmedlone127/2024-09-07-fresh_model_uncased_en.md new file mode 100644 index 00000000000000..7a28a8473597ea --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-fresh_model_uncased_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English fresh_model_uncased DistilBertForTokenClassification from Gkumi +author: John Snow Labs +name: fresh_model_uncased +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`fresh_model_uncased` is a English model originally trained by Gkumi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/fresh_model_uncased_en_5.5.0_3.0_1725739223329.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/fresh_model_uncased_en_5.5.0_3.0_1725739223329.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("fresh_model_uncased","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols("document")
+  .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("fresh_model_uncased", "en")
+  .setInputCols(Array("document","token"))
+  .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|fresh_model_uncased| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/Gkumi/fresh-model-uncased \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-generative_qas_pariwisata_bali_en.md b/docs/_posts/ahmedlone127/2024-09-07-generative_qas_pariwisata_bali_en.md new file mode 100644 index 00000000000000..c30b12e83894b3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-generative_qas_pariwisata_bali_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English generative_qas_pariwisata_bali MPNetEmbeddings from SwastyMaharani +author: John Snow Labs +name: generative_qas_pariwisata_bali +date: 2024-09-07 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`generative_qas_pariwisata_bali` is a English model originally trained by SwastyMaharani. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/generative_qas_pariwisata_bali_en_5.5.0_3.0_1725703164670.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/generative_qas_pariwisata_bali_en_5.5.0_3.0_1725703164670.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("generative_qas_pariwisata_bali","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("generative_qas_pariwisata_bali","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
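+
+Since MPNetEmbeddings produces one sentence-level vector per document, a common follow-up is comparing two texts; a minimal sketch, assuming the pipeline defined above, NumPy on the driver, and placeholder example strings:
+
+```python
+import numpy as np
+
+texts = spark.createDataFrame([["I love spark-nlp"], ["Spark NLP is an NLP library"]]).toDF("text")
+rows = pipeline.fit(texts).transform(texts).select("embeddings.embeddings").collect()
+
+# Each row holds one annotation whose `embeddings` field is the sentence vector.
+v1, v2 = (np.array(r[0][0]) for r in rows)
+print(float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))))
+```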
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|generative_qas_pariwisata_bali| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.6 MB| + +## References + +https://huggingface.co/SwastyMaharani/generative-qas-pariwisata-bali \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-german_french_translation_model_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-german_french_translation_model_pipeline_en.md new file mode 100644 index 00000000000000..8a0b221d3a852c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-german_french_translation_model_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English german_french_translation_model_pipeline pipeline MarianTransformer from SalomonMetre13 +author: John Snow Labs +name: german_french_translation_model_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`german_french_translation_model_pipeline` is a English model originally trained by SalomonMetre13. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/german_french_translation_model_pipeline_en_5.5.0_3.0_1725746818813.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/german_french_translation_model_pipeline_en_5.5.0_3.0_1725746818813.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("german_french_translation_model_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("german_french_translation_model_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
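+
+A minimal end-to-end sketch for the translation pipeline; the "text" input column name and the German example sentence are assumptions, and the MarianTransformer stage returns the French translation in its output column:
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("german_french_translation_model_pipeline", lang = "en")
+
+# German source sentences go in; the translated sentences come back as annotations.
+df = spark.createDataFrame([["Wie geht es dir?"]]).toDF("text")
+pipeline.transform(df).show(truncate=False)
+```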
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|german_french_translation_model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|518.2 MB| + +## References + +https://huggingface.co/SalomonMetre13/de_fr_translation_model + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-julibert_pipeline_ca.md b/docs/_posts/ahmedlone127/2024-09-07-julibert_pipeline_ca.md new file mode 100644 index 00000000000000..f45661fd7b9dc4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-julibert_pipeline_ca.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Catalan, Valencian julibert_pipeline pipeline RoBertaEmbeddings from softcatala +author: John Snow Labs +name: julibert_pipeline +date: 2024-09-07 +tags: [ca, open_source, pipeline, onnx] +task: Embeddings +language: ca +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`julibert_pipeline` is a Catalan, Valencian model originally trained by softcatala. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/julibert_pipeline_ca_5.5.0_3.0_1725678554029.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/julibert_pipeline_ca_5.5.0_3.0_1725678554029.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("julibert_pipeline", lang = "ca") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("julibert_pipeline", lang = "ca") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|julibert_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|ca| +|Size:|465.8 MB| + +## References + +https://huggingface.co/softcatala/julibert + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-kaggle_competition_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-kaggle_competition_pipeline_en.md new file mode 100644 index 00000000000000..8fd3b0d16eb923 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-kaggle_competition_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English kaggle_competition_pipeline pipeline DistilBertForSequenceClassification from picaba +author: John Snow Labs +name: kaggle_competition_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`kaggle_competition_pipeline` is a English model originally trained by picaba. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/kaggle_competition_pipeline_en_5.5.0_3.0_1725674485738.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/kaggle_competition_pipeline_en_5.5.0_3.0_1725674485738.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("kaggle_competition_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("kaggle_competition_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
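+
+For a quick check on a single string, the same pretrained pipeline can be used through `annotate()`, which returns a plain Python dict keyed by the pipeline's output columns. A short sketch (the example sentence is arbitrary):
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("kaggle_competition_pipeline", lang = "en")
+
+# annotate() runs the whole pipeline on one string and returns the annotation results as a dict
+result = pipeline.annotate("I really enjoyed this product, it works exactly as described.")
+print(result)
+```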
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|kaggle_competition_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/picaba/Kaggle_competition + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-lct_ner_en.md b/docs/_posts/ahmedlone127/2024-09-07-lct_ner_en.md new file mode 100644 index 00000000000000..72737f648a01cf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-lct_ner_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English lct_ner DistilBertForTokenClassification from IshikiI-01 +author: John Snow Labs +name: lct_ner +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`lct_ner` is a English model originally trained by IshikiI-01. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/lct_ner_en_5.5.0_3.0_1725730243345.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/lct_ner_en_5.5.0_3.0_1725730243345.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import Tokenizer, DistilBertForTokenClassification
+from pyspark.ml import Pipeline
+
+# Convert the raw text column into a document annotation
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+# Split each document into tokens
+tokenizer = Tokenizer() \
+    .setInputCols(["document"]) \
+    .setOutputCol("token")
+
+# Load the pretrained token classifier; it consumes the document and token columns
+tokenClassifier = DistilBertForTokenClassification.pretrained("lct_ner","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.DocumentAssembler
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+// Convert the raw text column into a document annotation
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+// Split each document into tokens
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+// Load the pretrained token classifier; it consumes the document and token columns
+val tokenClassifier = DistilBertForTokenClassification.pretrained("lct_ner", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
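+
+After the pipeline has been applied, the predicted tags are stored in the `ner` output column of `pipelineDF`. A short illustrative query for reading them back out, using the column names from the example above:
+
+```python
+# token.result holds the tokens, ner.result the predicted label for each token
+pipelineDF.selectExpr("token.result as tokens", "ner.result as tags").show(truncate=False)
+```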
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|lct_ner| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|666.6 MB| + +## References + +https://huggingface.co/IshikiI-01/lct_ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-malurl_roberta_10e_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-malurl_roberta_10e_pipeline_en.md new file mode 100644 index 00000000000000..69f90a68d83466 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-malurl_roberta_10e_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English malurl_roberta_10e_pipeline pipeline RoBertaForSequenceClassification from bgspaditya +author: John Snow Labs +name: malurl_roberta_10e_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`malurl_roberta_10e_pipeline` is a English model originally trained by bgspaditya. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/malurl_roberta_10e_pipeline_en_5.5.0_3.0_1725679524190.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/malurl_roberta_10e_pipeline_en_5.5.0_3.0_1725679524190.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("malurl_roberta_10e_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("malurl_roberta_10e_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|malurl_roberta_10e_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|438.4 MB| + +## References + +https://huggingface.co/bgspaditya/malurl-roberta-10e + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-marian_finetuned_dyu_tonga_tonga_islands_french_en.md b/docs/_posts/ahmedlone127/2024-09-07-marian_finetuned_dyu_tonga_tonga_islands_french_en.md new file mode 100644 index 00000000000000..77786b7caba896 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-marian_finetuned_dyu_tonga_tonga_islands_french_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English marian_finetuned_dyu_tonga_tonga_islands_french MarianTransformer from isaacoluwafemiog +author: John Snow Labs +name: marian_finetuned_dyu_tonga_tonga_islands_french +date: 2024-09-07 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_dyu_tonga_tonga_islands_french` is a English model originally trained by isaacoluwafemiog. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_dyu_tonga_tonga_islands_french_en_5.5.0_3.0_1725746945377.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_dyu_tonga_tonga_islands_french_en_5.5.0_3.0_1725746945377.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import SentenceDetectorDLModel, MarianTransformer
+from pyspark.ml import Pipeline
+
+# Convert the raw text column into a document annotation
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+# Split each document into sentences for the translator
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+# Load the pretrained translation model; it reads sentences and writes translations
+marian = MarianTransformer.pretrained("marian_finetuned_dyu_tonga_tonga_islands_french","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.DocumentAssembler
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+// Convert the raw text column into a document annotation
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+// Split each document into sentences for the translator
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+// Load the pretrained translation model; it reads sentences and writes translations
+val marian = MarianTransformer.pretrained("marian_finetuned_dyu_tonga_tonga_islands_french","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
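+
+The generated text is returned in the MarianTransformer output column, named `translation` in the example above. A short illustrative query for reading it back out of `pipelineDF`:
+
+```python
+# Each annotation in the translation column carries one translated sentence in its result field
+pipelineDF.selectExpr("explode(translation.result) as translated_text").show(truncate=False)
+```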
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_dyu_tonga_tonga_islands_french| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.4 MB| + +## References + +https://huggingface.co/isaacoluwafemiog/marian-finetuned-dyu-to-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-minilmv2_l6_h384_r_fineweb_100k_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-minilmv2_l6_h384_r_fineweb_100k_pipeline_en.md new file mode 100644 index 00000000000000..e9e67a98d1fd12 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-minilmv2_l6_h384_r_fineweb_100k_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English minilmv2_l6_h384_r_fineweb_100k_pipeline pipeline RoBertaEmbeddings from pszemraj +author: John Snow Labs +name: minilmv2_l6_h384_r_fineweb_100k_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`minilmv2_l6_h384_r_fineweb_100k_pipeline` is a English model originally trained by pszemraj. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/minilmv2_l6_h384_r_fineweb_100k_pipeline_en_5.5.0_3.0_1725716365932.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/minilmv2_l6_h384_r_fineweb_100k_pipeline_en_5.5.0_3.0_1725716365932.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("minilmv2_l6_h384_r_fineweb_100k_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("minilmv2_l6_h384_r_fineweb_100k_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
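+
+Each row returned by `pipeline.transform(df)` carries annotation structs whose `embeddings` field holds the vectors. A minimal sketch for pulling them out (the output column name `embeddings` is an assumption about this pipeline; `annotations.printSchema()` shows the columns it actually produces):
+
+```python
+# Inspect which output columns the pipeline produces
+annotations.printSchema()
+
+# Explode the (assumed) "embeddings" column into one row per token with its vector
+annotations.selectExpr("explode(embeddings) as emb") \
+    .selectExpr("emb.result as token", "emb.embeddings as vector") \
+    .show(truncate=False)
+```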
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|minilmv2_l6_h384_r_fineweb_100k_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|114.2 MB| + +## References + +https://huggingface.co/pszemraj/MiniLMv2-L6-H384_R-fineweb-100k + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-mix_vietnamese_english_4m_en.md b/docs/_posts/ahmedlone127/2024-09-07-mix_vietnamese_english_4m_en.md new file mode 100644 index 00000000000000..0f960d4d4c6301 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-mix_vietnamese_english_4m_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English mix_vietnamese_english_4m MarianTransformer from Eugenememe +author: John Snow Labs +name: mix_vietnamese_english_4m +date: 2024-09-07 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mix_vietnamese_english_4m` is a English model originally trained by Eugenememe. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mix_vietnamese_english_4m_en_5.5.0_3.0_1725746905581.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mix_vietnamese_english_4m_en_5.5.0_3.0_1725746905581.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import SentenceDetectorDLModel, MarianTransformer
+from pyspark.ml import Pipeline
+
+# Convert the raw text column into a document annotation
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+# Split each document into sentences for the translator
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+# Load the pretrained translation model; it reads sentences and writes translations
+marian = MarianTransformer.pretrained("mix_vietnamese_english_4m","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.DocumentAssembler
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+// Convert the raw text column into a document annotation
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+// Split each document into sentences for the translator
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+// Load the pretrained translation model; it reads sentences and writes translations
+val marian = MarianTransformer.pretrained("mix_vietnamese_english_4m","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mix_vietnamese_english_4m| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|475.2 MB| + +## References + +https://huggingface.co/Eugenememe/mix-vi-en-4m \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-model_perturbations_en.md b/docs/_posts/ahmedlone127/2024-09-07-model_perturbations_en.md new file mode 100644 index 00000000000000..329ef0ec3277fe --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-model_perturbations_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English model_perturbations DistilBertForTokenClassification from cria111 +author: John Snow Labs +name: model_perturbations +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`model_perturbations` is a English model originally trained by cria111. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/model_perturbations_en_5.5.0_3.0_1725730878611.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/model_perturbations_en_5.5.0_3.0_1725730878611.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import Tokenizer, DistilBertForTokenClassification
+from pyspark.ml import Pipeline
+
+# Convert the raw text column into a document annotation
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+# Split each document into tokens
+tokenizer = Tokenizer() \
+    .setInputCols(["document"]) \
+    .setOutputCol("token")
+
+# Load the pretrained token classifier; it consumes the document and token columns
+tokenClassifier = DistilBertForTokenClassification.pretrained("model_perturbations","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.DocumentAssembler
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+// Convert the raw text column into a document annotation
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+// Split each document into tokens
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+// Load the pretrained token classifier; it consumes the document and token columns
+val tokenClassifier = DistilBertForTokenClassification.pretrained("model_perturbations", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|model_perturbations| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/cria111/model_perturbations \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-model_perturbations_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-model_perturbations_pipeline_en.md new file mode 100644 index 00000000000000..2f8235dfe7319f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-model_perturbations_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English model_perturbations_pipeline pipeline DistilBertForTokenClassification from cria111 +author: John Snow Labs +name: model_perturbations_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`model_perturbations_pipeline` is a English model originally trained by cria111. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/model_perturbations_pipeline_en_5.5.0_3.0_1725730889977.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/model_perturbations_pipeline_en_5.5.0_3.0_1725730889977.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("model_perturbations_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("model_perturbations_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|model_perturbations_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/cria111/model_perturbations + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-model_vrushali_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-model_vrushali_pipeline_en.md new file mode 100644 index 00000000000000..d8d18b8a293448 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-model_vrushali_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English model_vrushali_pipeline pipeline DistilBertForTokenClassification from Vrushali +author: John Snow Labs +name: model_vrushali_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`model_vrushali_pipeline` is a English model originally trained by Vrushali. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/model_vrushali_pipeline_en_5.5.0_3.0_1725739603604.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/model_vrushali_pipeline_en_5.5.0_3.0_1725739603604.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("model_vrushali_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("model_vrushali_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|model_vrushali_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.4 MB| + +## References + +https://huggingface.co/Vrushali/model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-ner_model_arshiakarimian_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-ner_model_arshiakarimian_pipeline_en.md new file mode 100644 index 00000000000000..a2a5b67c0d64f4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-ner_model_arshiakarimian_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English ner_model_arshiakarimian_pipeline pipeline DistilBertForTokenClassification from ArshiaKarimian +author: John Snow Labs +name: ner_model_arshiakarimian_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ner_model_arshiakarimian_pipeline` is a English model originally trained by ArshiaKarimian. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ner_model_arshiakarimian_pipeline_en_5.5.0_3.0_1725739481808.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ner_model_arshiakarimian_pipeline_en_5.5.0_3.0_1725739481808.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("ner_model_arshiakarimian_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("ner_model_arshiakarimian_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ner_model_arshiakarimian_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/ArshiaKarimian/NER_model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-ner_replica_pipeline_tr.md b/docs/_posts/ahmedlone127/2024-09-07-ner_replica_pipeline_tr.md new file mode 100644 index 00000000000000..5cf64d3c2d4881 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-ner_replica_pipeline_tr.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Turkish ner_replica_pipeline pipeline BertForTokenClassification from merve +author: John Snow Labs +name: ner_replica_pipeline +date: 2024-09-07 +tags: [tr, open_source, pipeline, onnx] +task: Named Entity Recognition +language: tr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ner_replica_pipeline` is a Turkish model originally trained by merve. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ner_replica_pipeline_tr_5.5.0_3.0_1725701828454.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ner_replica_pipeline_tr_5.5.0_3.0_1725701828454.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("ner_replica_pipeline", lang = "tr") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("ner_replica_pipeline", lang = "tr") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ner_replica_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|tr| +|Size:|412.4 MB| + +## References + +https://huggingface.co/merve/ner-replica + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-nlp_til2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-nlp_til2_pipeline_en.md new file mode 100644 index 00000000000000..664c01cc512d61 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-nlp_til2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English nlp_til2_pipeline pipeline DistilBertForTokenClassification from casual +author: John Snow Labs +name: nlp_til2_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`nlp_til2_pipeline` is a English model originally trained by casual. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/nlp_til2_pipeline_en_5.5.0_3.0_1725734472893.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/nlp_til2_pipeline_en_5.5.0_3.0_1725734472893.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("nlp_til2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("nlp_til2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|nlp_til2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/casual/nlp_til2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-nreimers_minilmv2_l6_h384_distilled_from_roberta_large_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-nreimers_minilmv2_l6_h384_distilled_from_roberta_large_pipeline_en.md new file mode 100644 index 00000000000000..b3db82c9bbad32 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-nreimers_minilmv2_l6_h384_distilled_from_roberta_large_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English nreimers_minilmv2_l6_h384_distilled_from_roberta_large_pipeline pipeline RoBertaForTokenClassification from baileyk +author: John Snow Labs +name: nreimers_minilmv2_l6_h384_distilled_from_roberta_large_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`nreimers_minilmv2_l6_h384_distilled_from_roberta_large_pipeline` is a English model originally trained by baileyk. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/nreimers_minilmv2_l6_h384_distilled_from_roberta_large_pipeline_en_5.5.0_3.0_1725707516537.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/nreimers_minilmv2_l6_h384_distilled_from_roberta_large_pipeline_en_5.5.0_3.0_1725707516537.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("nreimers_minilmv2_l6_h384_distilled_from_roberta_large_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("nreimers_minilmv2_l6_h384_distilled_from_roberta_large_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|nreimers_minilmv2_l6_h384_distilled_from_roberta_large_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|111.5 MB| + +## References + +https://huggingface.co/baileyk/nreimers_MiniLMv2-L6-H384-distilled-from-RoBERTa-Large + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_likhith231_en.md b/docs/_posts/ahmedlone127/2024-09-07-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_likhith231_en.md new file mode 100644 index 00000000000000..d900b6a136ca63 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_likhith231_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_likhith231 MarianTransformer from likhith231 +author: John Snow Labs +name: opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_likhith231 +date: 2024-09-07 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_likhith231` is a English model originally trained by likhith231. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_likhith231_en_5.5.0_3.0_1725741074073.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_likhith231_en_5.5.0_3.0_1725741074073.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import SentenceDetectorDLModel, MarianTransformer
+from pyspark.ml import Pipeline
+
+# Convert the raw text column into a document annotation
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+# Split each document into sentences for the translator
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+# Load the pretrained translation model; it reads sentences and writes translations
+marian = MarianTransformer.pretrained("opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_likhith231","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.DocumentAssembler
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+// Convert the raw text column into a document annotation
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+// Split each document into sentences for the translator
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+// Load the pretrained translation model; it reads sentences and writes translations
+val marian = MarianTransformer.pretrained("opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_likhith231","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_likhith231| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.4 MB| + +## References + +https://huggingface.co/likhith231/opus-mt-en-ro-finetuned-en-to-ro \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-opus_maltese_english_swahili_finetuned_english_tonga_tonga_islands_swahili_zumaridi_en.md b/docs/_posts/ahmedlone127/2024-09-07-opus_maltese_english_swahili_finetuned_english_tonga_tonga_islands_swahili_zumaridi_en.md new file mode 100644 index 00000000000000..7929c02c0c5377 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-opus_maltese_english_swahili_finetuned_english_tonga_tonga_islands_swahili_zumaridi_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_english_swahili_finetuned_english_tonga_tonga_islands_swahili_zumaridi MarianTransformer from Zumaridi +author: John Snow Labs +name: opus_maltese_english_swahili_finetuned_english_tonga_tonga_islands_swahili_zumaridi +date: 2024-09-07 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_swahili_finetuned_english_tonga_tonga_islands_swahili_zumaridi` is a English model originally trained by Zumaridi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_swahili_finetuned_english_tonga_tonga_islands_swahili_zumaridi_en_5.5.0_3.0_1725740402263.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_swahili_finetuned_english_tonga_tonga_islands_swahili_zumaridi_en_5.5.0_3.0_1725740402263.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import SentenceDetectorDLModel, MarianTransformer
+from pyspark.ml import Pipeline
+
+# Convert the raw text column into a document annotation
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+# Split each document into sentences for the translator
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+# Load the pretrained translation model; it reads sentences and writes translations
+marian = MarianTransformer.pretrained("opus_maltese_english_swahili_finetuned_english_tonga_tonga_islands_swahili_zumaridi","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.DocumentAssembler
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+// Convert the raw text column into a document annotation
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+// Split each document into sentences for the translator
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+// Load the pretrained translation model; it reads sentences and writes translations
+val marian = MarianTransformer.pretrained("opus_maltese_english_swahili_finetuned_english_tonga_tonga_islands_swahili_zumaridi","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_swahili_finetuned_english_tonga_tonga_islands_swahili_zumaridi| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|506.5 MB| + +## References + +https://huggingface.co/Zumaridi/opus-mt-en-sw-finetuned-en-to-sw \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-opus_maltese_french_english_bds_en.md b/docs/_posts/ahmedlone127/2024-09-07-opus_maltese_french_english_bds_en.md new file mode 100644 index 00000000000000..18c9238f8d9cc3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-opus_maltese_french_english_bds_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_french_english_bds MarianTransformer from Anhptp +author: John Snow Labs +name: opus_maltese_french_english_bds +date: 2024-09-07 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_french_english_bds` is a English model originally trained by Anhptp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_french_english_bds_en_5.5.0_3.0_1725741158659.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_french_english_bds_en_5.5.0_3.0_1725741158659.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import SentenceDetectorDLModel, MarianTransformer
+from pyspark.ml import Pipeline
+
+# Convert the raw text column into a document annotation
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+# Split each document into sentences for the translator
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+# Load the pretrained translation model; it reads sentences and writes translations
+marian = MarianTransformer.pretrained("opus_maltese_french_english_bds","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.DocumentAssembler
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+// Convert the raw text column into a document annotation
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+// Split each document into sentences for the translator
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+// Load the pretrained translation model; it reads sentences and writes translations
+val marian = MarianTransformer.pretrained("opus_maltese_french_english_bds","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_french_english_bds| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|507.6 MB| + +## References + +https://huggingface.co/Anhptp/opus-mt-fr-en-BDS \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-opus_maltese_german_english_finetuned_german_tonga_tonga_islands_english_second_tiagohatta_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-opus_maltese_german_english_finetuned_german_tonga_tonga_islands_english_second_tiagohatta_pipeline_en.md new file mode 100644 index 00000000000000..dfd05ca7412ba6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-opus_maltese_german_english_finetuned_german_tonga_tonga_islands_english_second_tiagohatta_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_german_english_finetuned_german_tonga_tonga_islands_english_second_tiagohatta_pipeline pipeline MarianTransformer from tiagohatta +author: John Snow Labs +name: opus_maltese_german_english_finetuned_german_tonga_tonga_islands_english_second_tiagohatta_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_german_english_finetuned_german_tonga_tonga_islands_english_second_tiagohatta_pipeline` is a English model originally trained by tiagohatta. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_german_english_finetuned_german_tonga_tonga_islands_english_second_tiagohatta_pipeline_en_5.5.0_3.0_1725747480668.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_german_english_finetuned_german_tonga_tonga_islands_english_second_tiagohatta_pipeline_en_5.5.0_3.0_1725747480668.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_maltese_german_english_finetuned_german_tonga_tonga_islands_english_second_tiagohatta_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_maltese_german_english_finetuned_german_tonga_tonga_islands_english_second_tiagohatta_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_german_english_finetuned_german_tonga_tonga_islands_english_second_tiagohatta_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|499.9 MB| + +## References + +https://huggingface.co/tiagohatta/opus-mt-de-en-finetuned-de-to-en-second + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-opus_tatoeba_english_japanese_finetuned_eng_tonga_tonga_islands_jpn_hani_nan.md b/docs/_posts/ahmedlone127/2024-09-07-opus_tatoeba_english_japanese_finetuned_eng_tonga_tonga_islands_jpn_hani_nan.md new file mode 100644 index 00000000000000..15bf4b3343444f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-opus_tatoeba_english_japanese_finetuned_eng_tonga_tonga_islands_jpn_hani_nan.md @@ -0,0 +1,94 @@ +--- +layout: model +title: None opus_tatoeba_english_japanese_finetuned_eng_tonga_tonga_islands_jpn_hani MarianTransformer from julianty +author: John Snow Labs +name: opus_tatoeba_english_japanese_finetuned_eng_tonga_tonga_islands_jpn_hani +date: 2024-09-07 +tags: [nan, open_source, onnx, translation, marian] +task: Translation +language: nan +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_tatoeba_english_japanese_finetuned_eng_tonga_tonga_islands_jpn_hani` is a None model originally trained by julianty. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_tatoeba_english_japanese_finetuned_eng_tonga_tonga_islands_jpn_hani_nan_5.5.0_3.0_1725747303490.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_tatoeba_english_japanese_finetuned_eng_tonga_tonga_islands_jpn_hani_nan_5.5.0_3.0_1725747303490.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import SentenceDetectorDLModel, MarianTransformer
+from pyspark.ml import Pipeline
+
+# Convert the raw text column into a document annotation
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+# Split each document into sentences for the translator
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+# Load the pretrained translation model; it reads sentences and writes translations
+marian = MarianTransformer.pretrained("opus_tatoeba_english_japanese_finetuned_eng_tonga_tonga_islands_jpn_hani","nan") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+import com.johnsnowlabs.nlp.DocumentAssembler
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+// Convert the raw text column into a document annotation
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+// Split each document into sentences for the translator
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+// Load the pretrained translation model; it reads sentences and writes translations
+val marian = MarianTransformer.pretrained("opus_tatoeba_english_japanese_finetuned_eng_tonga_tonga_islands_jpn_hani","nan")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_tatoeba_english_japanese_finetuned_eng_tonga_tonga_islands_jpn_hani| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|nan| +|Size:|542.0 MB| + +## References + +https://huggingface.co/julianty/opus-tatoeba-en-ja-finetuned-eng-to-jpn_Hani \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-piidetection_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-piidetection_pipeline_en.md new file mode 100644 index 00000000000000..7c05302351970a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-piidetection_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English piidetection_pipeline pipeline DistilBertForTokenClassification from codeSlang +author: John Snow Labs +name: piidetection_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`piidetection_pipeline` is a English model originally trained by codeSlang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/piidetection_pipeline_en_5.5.0_3.0_1725730357876.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/piidetection_pipeline_en_5.5.0_3.0_1725730357876.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("piidetection_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("piidetection_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|piidetection_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/codeSlang/piidetection + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-piimasking_pytorch_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-piimasking_pytorch_pipeline_en.md new file mode 100644 index 00000000000000..8c00144017e9b0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-piimasking_pytorch_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English piimasking_pytorch_pipeline pipeline DistilBertForTokenClassification from shivangiss +author: John Snow Labs +name: piimasking_pytorch_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`piimasking_pytorch_pipeline` is a English model originally trained by shivangiss. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/piimasking_pytorch_pipeline_en_5.5.0_3.0_1725731121739.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/piimasking_pytorch_pipeline_en_5.5.0_3.0_1725731121739.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("piimasking_pytorch_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("piimasking_pytorch_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|piimasking_pytorch_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|505.8 MB| + +## References + +https://huggingface.co/shivangiss/piimasking_pytorch + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-platzi_distilroberta_base_mrpc_miguel_moroyoqui_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-platzi_distilroberta_base_mrpc_miguel_moroyoqui_pipeline_en.md new file mode 100644 index 00000000000000..3568aa1b9e58b5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-platzi_distilroberta_base_mrpc_miguel_moroyoqui_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English platzi_distilroberta_base_mrpc_miguel_moroyoqui_pipeline pipeline RoBertaForSequenceClassification from moroyoqui +author: John Snow Labs +name: platzi_distilroberta_base_mrpc_miguel_moroyoqui_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`platzi_distilroberta_base_mrpc_miguel_moroyoqui_pipeline` is a English model originally trained by moroyoqui. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/platzi_distilroberta_base_mrpc_miguel_moroyoqui_pipeline_en_5.5.0_3.0_1725718229064.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/platzi_distilroberta_base_mrpc_miguel_moroyoqui_pipeline_en_5.5.0_3.0_1725718229064.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("platzi_distilroberta_base_mrpc_miguel_moroyoqui_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("platzi_distilroberta_base_mrpc_miguel_moroyoqui_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|platzi_distilroberta_base_mrpc_miguel_moroyoqui_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|308.6 MB| + +## References + +https://huggingface.co/moroyoqui/platzi-distilroberta-base-mrpc-miguel-moroyoqui + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-queansmodel_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-queansmodel_pipeline_en.md new file mode 100644 index 00000000000000..89af7737187415 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-queansmodel_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English queansmodel_pipeline pipeline DistilBertForQuestionAnswering from KeiMura +author: John Snow Labs +name: queansmodel_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`queansmodel_pipeline` is a English model originally trained by KeiMura. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/queansmodel_pipeline_en_5.5.0_3.0_1725695639011.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/queansmodel_pipeline_en_5.5.0_3.0_1725695639011.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("queansmodel_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("queansmodel_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|queansmodel_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/KeiMura/QueAnsModel + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-results_elseif02_en.md b/docs/_posts/ahmedlone127/2024-09-07-results_elseif02_en.md new file mode 100644 index 00000000000000..cde2ef2c9b817a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-results_elseif02_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English results_elseif02 DistilBertForQuestionAnswering from Elseif02 +author: John Snow Labs +name: results_elseif02 +date: 2024-09-07 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`results_elseif02` is a English model originally trained by Elseif02. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/results_elseif02_en_5.5.0_3.0_1725722668785.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/results_elseif02_en_5.5.0_3.0_1725722668785.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

# Assemble the raw question and context columns into annotation columns
documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = DistilBertForQuestionAnswering.pretrained("results_elseif02","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = DistilBertForQuestionAnswering.pretrained("results_elseif02", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?","I use spark-nlp.")).toDS.toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
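The `answer` column written above holds Spark NLP annotations whose `result` field contains the extracted answer span. A brief, assumed inspection step, using the column names configured in the snippet above:

```python
# Show the predicted answer text for each question/context pair
pipelineDF.select("answer.result").show(truncate=False)
```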
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|results_elseif02| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|505.4 MB| + +## References + +https://huggingface.co/Elseif02/results \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-roberta_base_epoch_41_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-roberta_base_epoch_41_pipeline_en.md new file mode 100644 index 00000000000000..7e54ae0ed97878 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-roberta_base_epoch_41_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_base_epoch_41_pipeline pipeline RoBertaEmbeddings from yanaiela +author: John Snow Labs +name: roberta_base_epoch_41_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_epoch_41_pipeline` is a English model originally trained by yanaiela. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_epoch_41_pipeline_en_5.5.0_3.0_1725678851719.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_epoch_41_pipeline_en_5.5.0_3.0_1725678851719.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_epoch_41_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_epoch_41_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_epoch_41_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|297.3 MB| + +## References + +https://huggingface.co/yanaiela/roberta-base-epoch_41 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-roberta_base_epoch_43_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-roberta_base_epoch_43_pipeline_en.md new file mode 100644 index 00000000000000..816209164378d4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-roberta_base_epoch_43_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_base_epoch_43_pipeline pipeline RoBertaEmbeddings from yanaiela +author: John Snow Labs +name: roberta_base_epoch_43_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_epoch_43_pipeline` is a English model originally trained by yanaiela. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_epoch_43_pipeline_en_5.5.0_3.0_1725673570339.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_epoch_43_pipeline_en_5.5.0_3.0_1725673570339.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_epoch_43_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_epoch_43_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_epoch_43_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|297.3 MB| + +## References + +https://huggingface.co/yanaiela/roberta-base-epoch_43 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-roberta_base_ner_demo_turshilt2_pipeline_mn.md b/docs/_posts/ahmedlone127/2024-09-07-roberta_base_ner_demo_turshilt2_pipeline_mn.md new file mode 100644 index 00000000000000..a17257e6733def --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-roberta_base_ner_demo_turshilt2_pipeline_mn.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Mongolian roberta_base_ner_demo_turshilt2_pipeline pipeline RoBertaForTokenClassification from sanchirjav +author: John Snow Labs +name: roberta_base_ner_demo_turshilt2_pipeline +date: 2024-09-07 +tags: [mn, open_source, pipeline, onnx] +task: Named Entity Recognition +language: mn +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_ner_demo_turshilt2_pipeline` is a Mongolian model originally trained by sanchirjav. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_ner_demo_turshilt2_pipeline_mn_5.5.0_3.0_1725708033763.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_ner_demo_turshilt2_pipeline_mn_5.5.0_3.0_1725708033763.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_ner_demo_turshilt2_pipeline", lang = "mn") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_ner_demo_turshilt2_pipeline", lang = "mn") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_ner_demo_turshilt2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|mn| +|Size:|465.7 MB| + +## References + +https://huggingface.co/sanchirjav/roberta-base-ner-demo-turshilt2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-roberta_conll_learning_rate1e4_en.md b/docs/_posts/ahmedlone127/2024-09-07-roberta_conll_learning_rate1e4_en.md new file mode 100644 index 00000000000000..b088d834cc15de --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-roberta_conll_learning_rate1e4_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_conll_learning_rate1e4 RoBertaForTokenClassification from ICT2214Team7 +author: John Snow Labs +name: roberta_conll_learning_rate1e4 +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_conll_learning_rate1e4` is a English model originally trained by ICT2214Team7. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_conll_learning_rate1e4_en_5.5.0_3.0_1725721069603.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_conll_learning_rate1e4_en_5.5.0_3.0_1725721069603.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = RoBertaForTokenClassification.pretrained("roberta_conll_learning_rate1e4","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = RoBertaForTokenClassification.pretrained("roberta_conll_learning_rate1e4", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_conll_learning_rate1e4| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|306.6 MB| + +## References + +https://huggingface.co/ICT2214Team7/RoBERTa_conll_learning_rate1e4 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-roberta_india_ner_trainer_en.md b/docs/_posts/ahmedlone127/2024-09-07-roberta_india_ner_trainer_en.md new file mode 100644 index 00000000000000..1a2b98bc11d33a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-roberta_india_ner_trainer_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_india_ner_trainer RoBertaForTokenClassification from iamfadi +author: John Snow Labs +name: roberta_india_ner_trainer +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_india_ner_trainer` is a English model originally trained by iamfadi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_india_ner_trainer_en_5.5.0_3.0_1725668580107.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_india_ner_trainer_en_5.5.0_3.0_1725668580107.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = RoBertaForTokenClassification.pretrained("roberta_india_ner_trainer","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = RoBertaForTokenClassification.pretrained("roberta_india_ner_trainer", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
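The `ner` column produced above contains one annotation per token. A minimal sketch of how the predicted labels could be pulled out for inspection, assuming the column names configured in the snippet above:

```python
# One row per token-level prediction, with its label and character offsets
pipelineDF.selectExpr("explode(ner) as entity") \
    .selectExpr("entity.result as ner_label", "entity.begin", "entity.end") \
    .show(truncate=False)
```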
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_india_ner_trainer| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|444.1 MB| + +## References + +https://huggingface.co/iamfadi/roberta_india_ner_trainer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-roberta_large_mrpc_two_stage_en.md b/docs/_posts/ahmedlone127/2024-09-07-roberta_large_mrpc_two_stage_en.md new file mode 100644 index 00000000000000..5e03bffcaffc00 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-roberta_large_mrpc_two_stage_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_large_mrpc_two_stage RoBertaEmbeddings from ji-xin +author: John Snow Labs +name: roberta_large_mrpc_two_stage +date: 2024-09-07 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_large_mrpc_two_stage` is a English model originally trained by ji-xin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_large_mrpc_two_stage_en_5.5.0_3.0_1725678065686.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_large_mrpc_two_stage_en_5.5.0_3.0_1725678065686.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("roberta_large_mrpc_two_stage","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("roberta_large_mrpc_two_stage","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_large_mrpc_two_stage| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/ji-xin/roberta_large-MRPC-two_stage \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-scoris_maltese_english_lithuanian_lt.md b/docs/_posts/ahmedlone127/2024-09-07-scoris_maltese_english_lithuanian_lt.md new file mode 100644 index 00000000000000..7b3b9844bacdcc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-scoris_maltese_english_lithuanian_lt.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Lithuanian scoris_maltese_english_lithuanian MarianTransformer from scoris +author: John Snow Labs +name: scoris_maltese_english_lithuanian +date: 2024-09-07 +tags: [lt, open_source, onnx, translation, marian] +task: Translation +language: lt +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`scoris_maltese_english_lithuanian` is a Lithuanian model originally trained by scoris. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/scoris_maltese_english_lithuanian_lt_5.5.0_3.0_1725740422636.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/scoris_maltese_english_lithuanian_lt_5.5.0_3.0_1725740422636.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("scoris_maltese_english_lithuanian","lt") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("scoris_maltese_english_lithuanian","lt")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
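With the column names used above, the translated sentences end up in the `translation` column. A brief, assumed inspection step:

```python
# Each row holds the translated sentence(s) produced by the MarianTransformer stage
pipelineDF.select("translation.result").show(truncate=False)
```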
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|scoris_maltese_english_lithuanian| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|lt| +|Size:|1.3 GB| + +## References + +https://huggingface.co/scoris/scoris-mt-en-lt \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-sent_albert_persian_farsi_zwnj_base_v2_pipeline_fa.md b/docs/_posts/ahmedlone127/2024-09-07-sent_albert_persian_farsi_zwnj_base_v2_pipeline_fa.md new file mode 100644 index 00000000000000..cfa7e6e3c7c38d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-sent_albert_persian_farsi_zwnj_base_v2_pipeline_fa.md @@ -0,0 +1,71 @@ +--- +layout: model +title: Persian sent_albert_persian_farsi_zwnj_base_v2_pipeline pipeline BertSentenceEmbeddings from HooshvareLab +author: John Snow Labs +name: sent_albert_persian_farsi_zwnj_base_v2_pipeline +date: 2024-09-07 +tags: [fa, open_source, pipeline, onnx] +task: Embeddings +language: fa +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertSentenceEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sent_albert_persian_farsi_zwnj_base_v2_pipeline` is a Persian model originally trained by HooshvareLab. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sent_albert_persian_farsi_zwnj_base_v2_pipeline_fa_5.5.0_3.0_1725724753332.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sent_albert_persian_farsi_zwnj_base_v2_pipeline_fa_5.5.0_3.0_1725724753332.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("sent_albert_persian_farsi_zwnj_base_v2_pipeline", lang = "fa") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("sent_albert_persian_farsi_zwnj_base_v2_pipeline", lang = "fa") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sent_albert_persian_farsi_zwnj_base_v2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|fa| +|Size:|42.4 MB| + +## References + +https://huggingface.co/HooshvareLab/albert-fa-zwnj-base-v2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- SentenceDetectorDLModel +- BertSentenceEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-sent_alephbertgimmel_base_512_he.md b/docs/_posts/ahmedlone127/2024-09-07-sent_alephbertgimmel_base_512_he.md new file mode 100644 index 00000000000000..7fccc54ed6afe1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-sent_alephbertgimmel_base_512_he.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Hebrew sent_alephbertgimmel_base_512 BertSentenceEmbeddings from imvladikon +author: John Snow Labs +name: sent_alephbertgimmel_base_512 +date: 2024-09-07 +tags: [he, open_source, onnx, sentence_embeddings, bert] +task: Embeddings +language: he +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertSentenceEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertSentenceEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sent_alephbertgimmel_base_512` is a Hebrew model originally trained by imvladikon. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sent_alephbertgimmel_base_512_he_5.5.0_3.0_1725700996962.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sent_alephbertgimmel_base_512_he_5.5.0_3.0_1725700996962.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \ + .setInputCols(["document"]) \ + .setOutputCol("sentence") + +embeddings = BertSentenceEmbeddings.pretrained("sent_alephbertgimmel_base_512","he") \ + .setInputCols(["sentence"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, sentenceDL, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") + .setInputCols(Array("document")) + .setOutputCol("sentence") + +val embeddings = BertSentenceEmbeddings.pretrained("sent_alephbertgimmel_base_512","he") + .setInputCols(Array("sentence")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sent_alephbertgimmel_base_512| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentence]| +|Output Labels:|[embeddings]| +|Language:|he| +|Size:|690.4 MB| + +## References + +https://huggingface.co/imvladikon/alephbertgimmel-base-512 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-sent_bertislav_cu.md b/docs/_posts/ahmedlone127/2024-09-07-sent_bertislav_cu.md new file mode 100644 index 00000000000000..9bfa66af615adf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-sent_bertislav_cu.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Church Slavic, Church Slavonic, Old Bulgarian, Old Church Slavonic, Old Slavonic sent_bertislav BertSentenceEmbeddings from npedrazzini +author: John Snow Labs +name: sent_bertislav +date: 2024-09-07 +tags: [cu, open_source, onnx, sentence_embeddings, bert] +task: Embeddings +language: cu +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertSentenceEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertSentenceEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sent_bertislav` is a Church Slavic, Church Slavonic, Old Bulgarian, Old Church Slavonic, Old Slavonic model originally trained by npedrazzini. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sent_bertislav_cu_5.5.0_3.0_1725724780226.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sent_bertislav_cu_5.5.0_3.0_1725724780226.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \ + .setInputCols(["document"]) \ + .setOutputCol("sentence") + +embeddings = BertSentenceEmbeddings.pretrained("sent_bertislav","cu") \ + .setInputCols(["sentence"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, sentenceDL, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") + .setInputCols(Array("document")) + .setOutputCol("sentence") + +val embeddings = BertSentenceEmbeddings.pretrained("sent_bertislav","cu") + .setInputCols(Array("sentence")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sent_bertislav| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentence]| +|Output Labels:|[embeddings]| +|Language:|cu| +|Size:|666.9 MB| + +## References + +https://huggingface.co/npedrazzini/BERTislav \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-sent_financialbert_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-sent_financialbert_pipeline_en.md new file mode 100644 index 00000000000000..b24dec0bfa8993 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-sent_financialbert_pipeline_en.md @@ -0,0 +1,71 @@ +--- +layout: model +title: English sent_financialbert_pipeline pipeline BertSentenceEmbeddings from ahmedrachid +author: John Snow Labs +name: sent_financialbert_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertSentenceEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sent_financialbert_pipeline` is a English model originally trained by ahmedrachid. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sent_financialbert_pipeline_en_5.5.0_3.0_1725700759612.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sent_financialbert_pipeline_en_5.5.0_3.0_1725700759612.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("sent_financialbert_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("sent_financialbert_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sent_financialbert_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|410.0 MB| + +## References + +https://huggingface.co/ahmedrachid/FinancialBERT + +## Included Models + +- DocumentAssembler +- TokenizerModel +- SentenceDetectorDLModel +- BertSentenceEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-sent_medbert_en.md b/docs/_posts/ahmedlone127/2024-09-07-sent_medbert_en.md new file mode 100644 index 00000000000000..4892f1762c529d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-sent_medbert_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English sent_medbert BertSentenceEmbeddings from Charangan +author: John Snow Labs +name: sent_medbert +date: 2024-09-07 +tags: [en, open_source, onnx, sentence_embeddings, bert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertSentenceEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertSentenceEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sent_medbert` is a English model originally trained by Charangan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sent_medbert_en_5.5.0_3.0_1725725547070.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sent_medbert_en_5.5.0_3.0_1725725547070.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \ + .setInputCols(["document"]) \ + .setOutputCol("sentence") + +embeddings = BertSentenceEmbeddings.pretrained("sent_medbert","en") \ + .setInputCols(["sentence"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, sentenceDL, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") + .setInputCols(Array("document")) + .setOutputCol("sentence") + +val embeddings = BertSentenceEmbeddings.pretrained("sent_medbert","en") + .setInputCols(Array("sentence")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
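The `embeddings` column produced above stores one annotation per detected sentence, with the vector in its `embeddings` field. A minimal sketch of how the vectors could be retrieved, assuming the column names configured in the snippet above:

```python
# One row per sentence: the sentence text and its embedding vector
pipelineDF.selectExpr("explode(embeddings) as sentence_embedding") \
    .selectExpr(
        "sentence_embedding.result as sentence",
        "sentence_embedding.embeddings as vector"
    ) \
    .show(truncate=60)
```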
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sent_medbert| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentence]| +|Output Labels:|[embeddings]| +|Language:|en| +|Size:|403.1 MB| + +## References + +https://huggingface.co/Charangan/MedBERT \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-sent_pharmbert_cased_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-sent_pharmbert_cased_pipeline_en.md new file mode 100644 index 00000000000000..72c54b1f088fa5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-sent_pharmbert_cased_pipeline_en.md @@ -0,0 +1,71 @@ +--- +layout: model +title: English sent_pharmbert_cased_pipeline pipeline BertSentenceEmbeddings from Lianglab +author: John Snow Labs +name: sent_pharmbert_cased_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertSentenceEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sent_pharmbert_cased_pipeline` is a English model originally trained by Lianglab. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sent_pharmbert_cased_pipeline_en_5.5.0_3.0_1725736655062.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sent_pharmbert_cased_pipeline_en_5.5.0_3.0_1725736655062.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("sent_pharmbert_cased_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("sent_pharmbert_cased_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sent_pharmbert_cased_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|404.1 MB| + +## References + +https://huggingface.co/Lianglab/PharmBERT-cased + +## Included Models + +- DocumentAssembler +- TokenizerModel +- SentenceDetectorDLModel +- BertSentenceEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-sent_xlm_roberta_base_finetuned_hkdse_english_paper4_en.md b/docs/_posts/ahmedlone127/2024-09-07-sent_xlm_roberta_base_finetuned_hkdse_english_paper4_en.md new file mode 100644 index 00000000000000..ec6e9c6038b977 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-sent_xlm_roberta_base_finetuned_hkdse_english_paper4_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English sent_xlm_roberta_base_finetuned_hkdse_english_paper4 XlmRoBertaSentenceEmbeddings from Wootang01 +author: John Snow Labs +name: sent_xlm_roberta_base_finetuned_hkdse_english_paper4 +date: 2024-09-07 +tags: [en, open_source, onnx, sentence_embeddings, xlm_roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaSentenceEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaSentenceEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sent_xlm_roberta_base_finetuned_hkdse_english_paper4` is a English model originally trained by Wootang01. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sent_xlm_roberta_base_finetuned_hkdse_english_paper4_en_5.5.0_3.0_1725737771942.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sent_xlm_roberta_base_finetuned_hkdse_english_paper4_en_5.5.0_3.0_1725737771942.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \ + .setInputCols(["document"]) \ + .setOutputCol("sentence") + +embeddings = XlmRoBertaSentenceEmbeddings.pretrained("sent_xlm_roberta_base_finetuned_hkdse_english_paper4","en") \ + .setInputCols(["sentence"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, sentenceDL, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") + .setInputCols(Array("document")) + .setOutputCol("sentence") + +val embeddings = XlmRoBertaSentenceEmbeddings.pretrained("sent_xlm_roberta_base_finetuned_hkdse_english_paper4","en") + .setInputCols(Array("sentence")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sent_xlm_roberta_base_finetuned_hkdse_english_paper4| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentence]| +|Output Labels:|[embeddings]| +|Language:|en| +|Size:|977.5 MB| + +## References + +https://huggingface.co/Wootang01/xlm-roberta-base-finetuned-hkdse-english-paper4 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-sent_xlm_roberta_base_finetuned_hkdse_english_paper4_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-sent_xlm_roberta_base_finetuned_hkdse_english_paper4_pipeline_en.md new file mode 100644 index 00000000000000..92131c04e52372 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-sent_xlm_roberta_base_finetuned_hkdse_english_paper4_pipeline_en.md @@ -0,0 +1,71 @@ +--- +layout: model +title: English sent_xlm_roberta_base_finetuned_hkdse_english_paper4_pipeline pipeline XlmRoBertaSentenceEmbeddings from Wootang01 +author: John Snow Labs +name: sent_xlm_roberta_base_finetuned_hkdse_english_paper4_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaSentenceEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sent_xlm_roberta_base_finetuned_hkdse_english_paper4_pipeline` is a English model originally trained by Wootang01. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sent_xlm_roberta_base_finetuned_hkdse_english_paper4_pipeline_en_5.5.0_3.0_1725737843441.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sent_xlm_roberta_base_finetuned_hkdse_english_paper4_pipeline_en_5.5.0_3.0_1725737843441.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("sent_xlm_roberta_base_finetuned_hkdse_english_paper4_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("sent_xlm_roberta_base_finetuned_hkdse_english_paper4_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sent_xlm_roberta_base_finetuned_hkdse_english_paper4_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|978.1 MB| + +## References + +https://huggingface.co/Wootang01/xlm-roberta-base-finetuned-hkdse-english-paper4 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- SentenceDetectorDLModel +- XlmRoBertaSentenceEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-sent_xlm_roberta_longformer_base_4096_markussagen_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-sent_xlm_roberta_longformer_base_4096_markussagen_pipeline_en.md new file mode 100644 index 00000000000000..6f413b053b9455 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-sent_xlm_roberta_longformer_base_4096_markussagen_pipeline_en.md @@ -0,0 +1,71 @@ +--- +layout: model +title: English sent_xlm_roberta_longformer_base_4096_markussagen_pipeline pipeline XlmRoBertaSentenceEmbeddings from markussagen +author: John Snow Labs +name: sent_xlm_roberta_longformer_base_4096_markussagen_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaSentenceEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sent_xlm_roberta_longformer_base_4096_markussagen_pipeline` is a English model originally trained by markussagen. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sent_xlm_roberta_longformer_base_4096_markussagen_pipeline_en_5.5.0_3.0_1725682102845.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sent_xlm_roberta_longformer_base_4096_markussagen_pipeline_en_5.5.0_3.0_1725682102845.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("sent_xlm_roberta_longformer_base_4096_markussagen_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("sent_xlm_roberta_longformer_base_4096_markussagen_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sent_xlm_roberta_longformer_base_4096_markussagen_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.0 GB| + +## References + +https://huggingface.co/markussagen/xlm-roberta-longformer-base-4096 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- SentenceDetectorDLModel +- XlmRoBertaSentenceEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-sentencepiecebpe_nachos_french_en.md b/docs/_posts/ahmedlone127/2024-09-07-sentencepiecebpe_nachos_french_en.md new file mode 100644 index 00000000000000..8bb54a9fbf45af --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-sentencepiecebpe_nachos_french_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English sentencepiecebpe_nachos_french CamemBertEmbeddings from BioMedTok +author: John Snow Labs +name: sentencepiecebpe_nachos_french +date: 2024-09-07 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sentencepiecebpe_nachos_french` is a English model originally trained by BioMedTok. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sentencepiecebpe_nachos_french_en_5.5.0_3.0_1725691301051.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sentencepiecebpe_nachos_french_en_5.5.0_3.0_1725691301051.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("sentencepiecebpe_nachos_french","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("sentencepiecebpe_nachos_french","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
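Here the `embeddings` column holds one annotation per token, each carrying the token text in `result` and its vector in `embeddings`. A small, assumed inspection step using the column names configured above:

```python
from pyspark.sql import functions as F

# Pair every token with its embedding vector
pipelineDF.select(F.explode("embeddings").alias("ann")) \
    .select(F.col("ann.result").alias("token"), F.col("ann.embeddings").alias("vector")) \
    .show(5, truncate=80)
```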
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sentencepiecebpe_nachos_french| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|412.6 MB| + +## References + +https://huggingface.co/BioMedTok/SentencePieceBPE-NACHOS-FR \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-setfit_model_misinformation_on_organizations_gofundme_wef_en.md b/docs/_posts/ahmedlone127/2024-09-07-setfit_model_misinformation_on_organizations_gofundme_wef_en.md new file mode 100644 index 00000000000000..93f558bc8fb77b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-setfit_model_misinformation_on_organizations_gofundme_wef_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English setfit_model_misinformation_on_organizations_gofundme_wef MPNetEmbeddings from mitra-mir +author: John Snow Labs +name: setfit_model_misinformation_on_organizations_gofundme_wef +date: 2024-09-07 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`setfit_model_misinformation_on_organizations_gofundme_wef` is a English model originally trained by mitra-mir. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/setfit_model_misinformation_on_organizations_gofundme_wef_en_5.5.0_3.0_1725703294381.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/setfit_model_misinformation_on_organizations_gofundme_wef_en_5.5.0_3.0_1725703294381.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("setfit_model_misinformation_on_organizations_gofundme_wef","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("setfit_model_misinformation_on_organizations_gofundme_wef","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|setfit_model_misinformation_on_organizations_gofundme_wef| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/mitra-mir/setfit-model-Misinformation-on-Organizations-GoFundMe-WEF \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-spanish_finnish_all_quy_1_en.md b/docs/_posts/ahmedlone127/2024-09-07-spanish_finnish_all_quy_1_en.md new file mode 100644 index 00000000000000..54636f89caa80b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-spanish_finnish_all_quy_1_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English spanish_finnish_all_quy_1 MarianTransformer from nouman-10 +author: John Snow Labs +name: spanish_finnish_all_quy_1 +date: 2024-09-07 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`spanish_finnish_all_quy_1` is a English model originally trained by nouman-10. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/spanish_finnish_all_quy_1_en_5.5.0_3.0_1725747587436.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/spanish_finnish_all_quy_1_en_5.5.0_3.0_1725747587436.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("spanish_finnish_all_quy_1","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("spanish_finnish_all_quy_1","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|spanish_finnish_all_quy_1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|530.5 MB| + +## References + +https://huggingface.co/nouman-10/es_fi_all_quy_1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-stanford_deidentifier_base_finetuned_ner_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-stanford_deidentifier_base_finetuned_ner_pipeline_en.md new file mode 100644 index 00000000000000..afafe4fc63a236 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-stanford_deidentifier_base_finetuned_ner_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English stanford_deidentifier_base_finetuned_ner_pipeline pipeline BertForTokenClassification from antoineedy +author: John Snow Labs +name: stanford_deidentifier_base_finetuned_ner_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`stanford_deidentifier_base_finetuned_ner_pipeline` is a English model originally trained by antoineedy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/stanford_deidentifier_base_finetuned_ner_pipeline_en_5.5.0_3.0_1725735078128.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/stanford_deidentifier_base_finetuned_ner_pipeline_en_5.5.0_3.0_1725735078128.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("stanford_deidentifier_base_finetuned_ner_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("stanford_deidentifier_base_finetuned_ner_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
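The pipeline call above assumes an existing DataFrame `df` with a `text` column. A minimal way to build one for a quick smoke test might look like the following (the sample sentence is purely illustrative and not part of the original card):

```python
# Illustrative only: a tiny input DataFrame for the pretrained pipeline above.
df = spark.createDataFrame(
    [["John Doe was admitted to Stanford Hospital on 12 March 2021."]]
).toDF("text")

annotations = pipeline.transform(df)
annotations.show(truncate=False)
```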
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|stanford_deidentifier_base_finetuned_ner_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|408.2 MB| + +## References + +https://huggingface.co/antoineedy/stanford-deidentifier-base-finetuned-ner + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-taiyi_roberta_124m_d_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-taiyi_roberta_124m_d_pipeline_en.md new file mode 100644 index 00000000000000..f23fbb02370f14 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-taiyi_roberta_124m_d_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English taiyi_roberta_124m_d_pipeline pipeline RoBertaEmbeddings from IDEA-CCNL +author: John Snow Labs +name: taiyi_roberta_124m_d_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`taiyi_roberta_124m_d_pipeline` is a English model originally trained by IDEA-CCNL. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/taiyi_roberta_124m_d_pipeline_en_5.5.0_3.0_1725673018719.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/taiyi_roberta_124m_d_pipeline_en_5.5.0_3.0_1725673018719.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("taiyi_roberta_124m_d_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("taiyi_roberta_124m_d_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|taiyi_roberta_124m_d_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|466.0 MB| + +## References + +https://huggingface.co/IDEA-CCNL/Taiyi-Roberta-124M-D + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-testchatbotmodel1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-testchatbotmodel1_pipeline_en.md new file mode 100644 index 00000000000000..ddad965ee88dff --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-testchatbotmodel1_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English testchatbotmodel1_pipeline pipeline DistilBertForQuestionAnswering from TheoND +author: John Snow Labs +name: testchatbotmodel1_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`testchatbotmodel1_pipeline` is a English model originally trained by TheoND. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/testchatbotmodel1_pipeline_en_5.5.0_3.0_1725695572615.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/testchatbotmodel1_pipeline_en_5.5.0_3.0_1725695572615.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("testchatbotmodel1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("testchatbotmodel1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|testchatbotmodel1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/TheoND/testchatbotmodel1 + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-testing_en.md b/docs/_posts/ahmedlone127/2024-09-07-testing_en.md new file mode 100644 index 00000000000000..2b5eac336e9e51 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-testing_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English testing DistilBertForQuestionAnswering from Sybghat +author: John Snow Labs +name: testing +date: 2024-09-07 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`testing` is a English model originally trained by Sybghat. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/testing_en_5.5.0_3.0_1725695525272.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/testing_en_5.5.0_3.0_1725695525272.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = DistilBertForQuestionAnswering.pretrained("testing","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = DistilBertForQuestionAnswering.pretrained("testing", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
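Once the pipeline has run, the predicted answer span lives in the `answer` output column defined above. A short, illustrative way to read it back (assuming the `pipelineDF` variable from the Python example) could be:

```python
# Illustrative only: print the predicted answer text from the transformed DataFrame.
pipelineDF.select("answer.result").show(truncate=False)
```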
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|testing| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Sybghat/Testing \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-whisper_gujarati_small_gu.md b/docs/_posts/ahmedlone127/2024-09-07-whisper_gujarati_small_gu.md new file mode 100644 index 00000000000000..33a5325a11cb09 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-whisper_gujarati_small_gu.md @@ -0,0 +1,84 @@ +--- +layout: model +title: Gujarati whisper_gujarati_small WhisperForCTC from vasista22 +author: John Snow Labs +name: whisper_gujarati_small +date: 2024-09-07 +tags: [gu, open_source, onnx, asr, whisper] +task: Automatic Speech Recognition +language: gu +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: WhisperForCTC +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_gujarati_small` is a Gujarati model originally trained by vasista22. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_gujarati_small_gu_5.5.0_3.0_1725752697591.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_gujarati_small_gu_5.5.0_3.0_1725752697591.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

# `data` is assumed to be a DataFrame with raw audio floats in an "audio_content" column
audioAssembler = AudioAssembler() \
    .setInputCol("audio_content") \
    .setOutputCol("audio_assembler")

speechToText = WhisperForCTC.pretrained("whisper_gujarati_small","gu") \
    .setInputCols(["audio_assembler"]) \
    .setOutputCol("text")

pipeline = Pipeline().setStages([audioAssembler, speechToText])
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

// `data` is assumed to be a DataFrame with raw audio floats in an "audio_content" column
val audioAssembler = new AudioAssembler()
    .setInputCol("audio_content")
    .setOutputCol("audio_assembler")

val speechToText = WhisperForCTC.pretrained("whisper_gujarati_small", "gu")
    .setInputCols(Array("audio_assembler"))
    .setOutputCol("text")

val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
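The example above assumes a `data` DataFrame that already holds raw audio floats in an `audio_content` column. One possible way to build it from a local WAV file (purely illustrative; `librosa` and the file name are assumptions, not part of the original card) is:

```python
# Illustrative only: load a 16 kHz mono WAV file and wrap the raw floats
# into the `audio_content` column expected by the AudioAssembler above.
import librosa

raw_floats, _ = librosa.load("sample_gu.wav", sr=16000)
data = spark.createDataFrame([[raw_floats.tolist()]], ["audio_content"])
```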
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_gujarati_small| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[audio_assembler]| +|Output Labels:|[text]| +|Language:|gu| +|Size:|1.7 GB| + +## References + +https://huggingface.co/vasista22/whisper-gujarati-small \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-xlm_r_squad_serbian_lat_pipeline_sr.md b/docs/_posts/ahmedlone127/2024-09-07-xlm_r_squad_serbian_lat_pipeline_sr.md new file mode 100644 index 00000000000000..bcdb6317b0cad5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-xlm_r_squad_serbian_lat_pipeline_sr.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Serbian xlm_r_squad_serbian_lat_pipeline pipeline XlmRoBertaForQuestionAnswering from aleksahet +author: John Snow Labs +name: xlm_r_squad_serbian_lat_pipeline +date: 2024-09-07 +tags: [sr, open_source, pipeline, onnx] +task: Question Answering +language: sr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_r_squad_serbian_lat_pipeline` is a Serbian model originally trained by aleksahet. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_r_squad_serbian_lat_pipeline_sr_5.5.0_3.0_1725686224800.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_r_squad_serbian_lat_pipeline_sr_5.5.0_3.0_1725686224800.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_r_squad_serbian_lat_pipeline", lang = "sr") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_r_squad_serbian_lat_pipeline", lang = "sr") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_r_squad_serbian_lat_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|sr| +|Size:|816.7 MB| + +## References + +https://huggingface.co/aleksahet/xlm-r-squad-sr-lat + +## Included Models + +- MultiDocumentAssembler +- XlmRoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-xlm_r_squad_serbian_lat_sr.md b/docs/_posts/ahmedlone127/2024-09-07-xlm_r_squad_serbian_lat_sr.md new file mode 100644 index 00000000000000..27f07063f281f7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-xlm_r_squad_serbian_lat_sr.md @@ -0,0 +1,86 @@ +--- +layout: model +title: Serbian xlm_r_squad_serbian_lat XlmRoBertaForQuestionAnswering from aleksahet +author: John Snow Labs +name: xlm_r_squad_serbian_lat +date: 2024-09-07 +tags: [sr, open_source, onnx, question_answering, xlm_roberta] +task: Question Answering +language: sr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_r_squad_serbian_lat` is a Serbian model originally trained by aleksahet. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_r_squad_serbian_lat_sr_5.5.0_3.0_1725686103451.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_r_squad_serbian_lat_sr_5.5.0_3.0_1725686103451.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = XlmRoBertaForQuestionAnswering.pretrained("xlm_r_squad_serbian_lat","sr") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = XlmRoBertaForQuestionAnswering.pretrained("xlm_r_squad_serbian_lat", "sr")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_r_squad_serbian_lat| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|sr| +|Size:|816.7 MB| + +## References + +https://huggingface.co/aleksahet/xlm-r-squad-sr-lat \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_all_ahmad_alismail_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_all_ahmad_alismail_pipeline_en.md new file mode 100644 index 00000000000000..9405fb15795855 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_all_ahmad_alismail_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_all_ahmad_alismail_pipeline pipeline XlmRoBertaForTokenClassification from ahmad-alismail +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_all_ahmad_alismail_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_all_ahmad_alismail_pipeline` is a English model originally trained by ahmad-alismail. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_all_ahmad_alismail_pipeline_en_5.5.0_3.0_1725687226364.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_all_ahmad_alismail_pipeline_en_5.5.0_3.0_1725687226364.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_all_ahmad_alismail_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_all_ahmad_alismail_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_all_ahmad_alismail_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|858.2 MB| + +## References + +https://huggingface.co/ahmad-alismail/xlm-roberta-base-finetuned-panx-all + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_english_bessho_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_english_bessho_pipeline_en.md new file mode 100644 index 00000000000000..82ed66244454e7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_english_bessho_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_english_bessho_pipeline pipeline XlmRoBertaForTokenClassification from bessho +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_english_bessho_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_english_bessho_pipeline` is a English model originally trained by bessho. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_english_bessho_pipeline_en_5.5.0_3.0_1725743494181.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_english_bessho_pipeline_en_5.5.0_3.0_1725743494181.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_english_bessho_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_english_bessho_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_english_bessho_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|826.4 MB| + +## References + +https://huggingface.co/bessho/xlm-roberta-base-finetuned-panx-en + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_french_henryjiang_en.md b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_french_henryjiang_en.md new file mode 100644 index 00000000000000..454885f2cb1293 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_french_henryjiang_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_henryjiang XlmRoBertaForTokenClassification from henryjiang +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_henryjiang +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_henryjiang` is a English model originally trained by henryjiang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_henryjiang_en_5.5.0_3.0_1725744385576.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_henryjiang_en_5.5.0_3.0_1725744385576.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_henryjiang","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_henryjiang", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_henryjiang| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|843.5 MB| + +## References + +https://huggingface.co/henryjiang/xlm-roberta-base-finetuned-panx-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_german_french_benjiccee_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_german_french_benjiccee_pipeline_en.md new file mode 100644 index 00000000000000..e0e426d9bb8a55 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_german_french_benjiccee_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_french_benjiccee_pipeline pipeline XlmRoBertaForTokenClassification from Benjiccee +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_french_benjiccee_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_french_benjiccee_pipeline` is a English model originally trained by Benjiccee. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_benjiccee_pipeline_en_5.5.0_3.0_1725694616239.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_benjiccee_pipeline_en_5.5.0_3.0_1725694616239.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_french_benjiccee_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_french_benjiccee_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_french_benjiccee_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|858.2 MB| + +## References + +https://huggingface.co/Benjiccee/xlm-roberta-base-finetuned-panx-de-fr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_german_french_transformersbook_en.md b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_german_french_transformersbook_en.md new file mode 100644 index 00000000000000..5767b8c4b88ae8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_german_french_transformersbook_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_french_transformersbook XlmRoBertaForTokenClassification from transformersbook +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_french_transformersbook +date: 2024-09-07 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_french_transformersbook` is a English model originally trained by transformersbook. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_transformersbook_en_5.5.0_3.0_1725745360239.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_transformersbook_en_5.5.0_3.0_1725745360239.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_french_transformersbook","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_french_transformersbook", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_french_transformersbook| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|858.2 MB| + +## References + +https://huggingface.co/transformersbook/xlm-roberta-base-finetuned-panx-de-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_german_french_yasu320001_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_german_french_yasu320001_pipeline_en.md new file mode 100644 index 00000000000000..5afee387876e03 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_german_french_yasu320001_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_french_yasu320001_pipeline pipeline XlmRoBertaForTokenClassification from yasu320001 +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_french_yasu320001_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_french_yasu320001_pipeline` is a English model originally trained by yasu320001. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_yasu320001_pipeline_en_5.5.0_3.0_1725705622791.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_yasu320001_pipeline_en_5.5.0_3.0_1725705622791.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_french_yasu320001_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_french_yasu320001_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_french_yasu320001_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|858.2 MB| + +## References + +https://huggingface.co/yasu320001/xlm-roberta-base-finetuned-panx-de-fr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_german_french_yezune_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_german_french_yezune_pipeline_en.md new file mode 100644 index 00000000000000..232bd39de07313 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_panx_german_french_yezune_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_french_yezune_pipeline pipeline XlmRoBertaForTokenClassification from yezune +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_french_yezune_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_french_yezune_pipeline` is a English model originally trained by yezune. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_yezune_pipeline_en_5.5.0_3.0_1725705268079.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_yezune_pipeline_en_5.5.0_3.0_1725705268079.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_french_yezune_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_french_yezune_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_french_yezune_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|858.2 MB| + +## References + +https://huggingface.co/yezune/xlm-roberta-base-finetuned-panx-de-fr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_sayula_popoluca_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_sayula_popoluca_pipeline_en.md new file mode 100644 index 00000000000000..6b4bda5a25cd24 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_finetuned_sayula_popoluca_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_sayula_popoluca_pipeline pipeline XlmRoBertaForTokenClassification from muhammadbilal +author: John Snow Labs +name: xlm_roberta_base_finetuned_sayula_popoluca_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_sayula_popoluca_pipeline` is a English model originally trained by muhammadbilal. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_sayula_popoluca_pipeline_en_5.5.0_3.0_1725687354380.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_sayula_popoluca_pipeline_en_5.5.0_3.0_1725687354380.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_sayula_popoluca_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_sayula_popoluca_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_sayula_popoluca_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|842.3 MB| + +## References + +https://huggingface.co/muhammadbilal/xlm-roberta-base-finetuned-pos + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_panx_dataset_korean_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_panx_dataset_korean_pipeline_en.md new file mode 100644 index 00000000000000..04c3139f17ab50 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_base_panx_dataset_korean_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_panx_dataset_korean_pipeline pipeline XlmRoBertaForTokenClassification from tner +author: John Snow Labs +name: xlm_roberta_base_panx_dataset_korean_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_panx_dataset_korean_pipeline` is a English model originally trained by tner. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_panx_dataset_korean_pipeline_en_5.5.0_3.0_1725688773910.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_panx_dataset_korean_pipeline_en_5.5.0_3.0_1725688773910.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_panx_dataset_korean_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_panx_dataset_korean_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_panx_dataset_korean_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|786.8 MB| + +## References + +https://huggingface.co/tner/xlm-roberta-base-panx-dataset-ko + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_english_russian_emoji_v2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_english_russian_emoji_v2_pipeline_en.md new file mode 100644 index 00000000000000..c7da3c451a5118 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-xlm_roberta_english_russian_emoji_v2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_english_russian_emoji_v2_pipeline pipeline XlmRoBertaForSequenceClassification from amazon-sagemaker-community +author: John Snow Labs +name: xlm_roberta_english_russian_emoji_v2_pipeline +date: 2024-09-07 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_english_russian_emoji_v2_pipeline` is a English model originally trained by amazon-sagemaker-community. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_english_russian_emoji_v2_pipeline_en_5.5.0_3.0_1725711783638.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_english_russian_emoji_v2_pipeline_en_5.5.0_3.0_1725711783638.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_english_russian_emoji_v2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_english_russian_emoji_v2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_english_russian_emoji_v2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/amazon-sagemaker-community/xlm-roberta-en-ru-emoji-v2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-07-xlmroberta_ner_xlm_roberta_base_finetuned_panx_ner_it.md b/docs/_posts/ahmedlone127/2024-09-07-xlmroberta_ner_xlm_roberta_base_finetuned_panx_ner_it.md new file mode 100644 index 00000000000000..bfdc061418f04d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-07-xlmroberta_ner_xlm_roberta_base_finetuned_panx_ner_it.md @@ -0,0 +1,112 @@ +--- +layout: model +title: Italian Named Entity Recognition (from gunghio) +author: John Snow Labs +name: xlmroberta_ner_xlm_roberta_base_finetuned_panx_ner +date: 2024-09-07 +tags: [xlm_roberta, ner, token_classification, it, open_source, onnx] +task: Named Entity Recognition +language: it +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained Named Entity Recognition model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `xlm-roberta-base-finetuned-panx-ner` is a Italian model orginally trained by `gunghio`. + +## Predicted Entities + +`LOC`, `ORG`, `PER` + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_xlm_roberta_base_finetuned_panx_ner_it_5.5.0_3.0_1725743869379.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_xlm_roberta_base_finetuned_panx_ner_it_5.5.0_3.0_1725743869379.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +sentenceDetector = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")\ + .setInputCols(["document"])\ + .setOutputCol("sentence") + +tokenizer = Tokenizer() \ + .setInputCols("sentence") \ + .setOutputCol("token") + +tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlmroberta_ner_xlm_roberta_base_finetuned_panx_ner","it") \ + .setInputCols(["sentence", "token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline(stages=[documentAssembler, sentenceDetector, tokenizer, tokenClassifier]) + +data = spark.createDataFrame([["Adoro Spark NLP"]]).toDF("text") + +result = pipeline.fit(data).transform(data) +``` +```scala +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val sentenceDetector = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") + .setInputCols(Array("document")) + .setOutputCol("sentence") + +val tokenizer = new Tokenizer() + .setInputCols(Array("sentence")) + .setOutputCol("token") + +val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlmroberta_ner_xlm_roberta_base_finetuned_panx_ner","it") + .setInputCols(Array("sentence", "token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler,sentenceDetector, tokenizer, tokenClassifier)) + +val data = Seq("Adoro Spark NLP").toDF("text") + +val result = pipeline.fit(data).transform(data) +``` + +{:.nlu-block} +```python +import nlu +nlu.load("it.ner.xlmr_roberta.xtreme.base_finetuned").predict("""Adoro Spark NLP""") +``` +
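The pipeline above stops at token-level NER tags. If full entity chunks (`LOC`, `ORG`, `PER` spans) are needed, a `NerConverter` stage can be appended; this is an optional sketch and not part of the original card:

```python
# Illustrative only: group token-level tags into entity chunks.
# Reuses the documentAssembler, sentenceDetector, tokenizer, tokenClassifier
# and data variables from the Python example above.
from sparknlp.annotator import NerConverter

nerConverter = NerConverter() \
    .setInputCols(["sentence", "token", "ner"]) \
    .setOutputCol("ner_chunk")

pipeline_with_chunks = Pipeline(stages=[documentAssembler, sentenceDetector, tokenizer, tokenClassifier, nerConverter])
result = pipeline_with_chunks.fit(data).transform(data)
result.select("ner_chunk.result").show(truncate=False)
```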
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlmroberta_ner_xlm_roberta_base_finetuned_panx_ner| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|it| +|Size:|877.6 MB| + +## References + +References + +- https://huggingface.co/gunghio/xlm-roberta-base-finetuned-panx-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-8_shot_sta_head_trained_lr1e_4_en.md b/docs/_posts/ahmedlone127/2024-09-08-8_shot_sta_head_trained_lr1e_4_en.md new file mode 100644 index 00000000000000..077f74ede67510 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-8_shot_sta_head_trained_lr1e_4_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English 8_shot_sta_head_trained_lr1e_4 MPNetEmbeddings from Nhat1904 +author: John Snow Labs +name: 8_shot_sta_head_trained_lr1e_4 +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`8_shot_sta_head_trained_lr1e_4` is a English model originally trained by Nhat1904. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/8_shot_sta_head_trained_lr1e_4_en_5.5.0_3.0_1725815843340.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/8_shot_sta_head_trained_lr1e_4_en_5.5.0_3.0_1725815843340.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("8_shot_sta_head_trained_lr1e_4","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("8_shot_sta_head_trained_lr1e_4","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|8_shot_sta_head_trained_lr1e_4| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.0 MB| + +## References + +https://huggingface.co/Nhat1904/8_shot_STA_head_trained_lr1e-4 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-albert_model__25_3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-albert_model__25_3_pipeline_en.md new file mode 100644 index 00000000000000..77295e633c42cf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-albert_model__25_3_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English albert_model__25_3_pipeline pipeline DistilBertForSequenceClassification from KalaiselvanD +author: John Snow Labs +name: albert_model__25_3_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`albert_model__25_3_pipeline` is a English model originally trained by KalaiselvanD. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/albert_model__25_3_pipeline_en_5.5.0_3.0_1725808736728.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/albert_model__25_3_pipeline_en_5.5.0_3.0_1725808736728.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("albert_model__25_3_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("albert_model__25_3_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|albert_model__25_3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/KalaiselvanD/albert_model__25_3 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-albert_xxlarge_v2_disaster_twitter_preprocess_data_en.md b/docs/_posts/ahmedlone127/2024-09-08-albert_xxlarge_v2_disaster_twitter_preprocess_data_en.md new file mode 100644 index 00000000000000..257f80d073bde5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-albert_xxlarge_v2_disaster_twitter_preprocess_data_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English albert_xxlarge_v2_disaster_twitter_preprocess_data AlbertForSequenceClassification from JiaJiaCen +author: John Snow Labs +name: albert_xxlarge_v2_disaster_twitter_preprocess_data +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, albert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: AlbertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained AlbertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`albert_xxlarge_v2_disaster_twitter_preprocess_data` is a English model originally trained by JiaJiaCen. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/albert_xxlarge_v2_disaster_twitter_preprocess_data_en_5.5.0_3.0_1725755547311.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/albert_xxlarge_v2_disaster_twitter_preprocess_data_en_5.5.0_3.0_1725755547311.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, AlbertForSequenceClassification
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = AlbertForSequenceClassification.pretrained("albert_xxlarge_v2_disaster_twitter_preprocess_data","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = AlbertForSequenceClassification.pretrained("albert_xxlarge_v2_disaster_twitter_preprocess_data", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
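Once `pipelineDF` has been computed as above, the predicted label can be read out of the `class` annotation column. The field names below (`result`, `metadata`) follow the standard Spark NLP annotation schema and are shown only as a usage sketch.

```python
# "result" holds the predicted label, "metadata" the per-label scores (when provided)
pipelineDF.select("text", "class.result").show(truncate=False)

pipelineDF.selectExpr("explode(class) AS c") \
    .selectExpr("c.result", "c.metadata") \
    .show(truncate=False)
```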
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|albert_xxlarge_v2_disaster_twitter_preprocess_data| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|833.9 MB| + +## References + +https://huggingface.co/JiaJiaCen/albert-xxlarge-v2-disaster-twitter-preprocess_data \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-all_mpnet_base_newtriplets_v2_lr_1e_8_m_5_e_3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-all_mpnet_base_newtriplets_v2_lr_1e_8_m_5_e_3_pipeline_en.md new file mode 100644 index 00000000000000..4b5f6047b8a6bf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-all_mpnet_base_newtriplets_v2_lr_1e_8_m_5_e_3_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English all_mpnet_base_newtriplets_v2_lr_1e_8_m_5_e_3_pipeline pipeline MPNetEmbeddings from luiz-and-robert-thesis +author: John Snow Labs +name: all_mpnet_base_newtriplets_v2_lr_1e_8_m_5_e_3_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_mpnet_base_newtriplets_v2_lr_1e_8_m_5_e_3_pipeline` is a English model originally trained by luiz-and-robert-thesis. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_mpnet_base_newtriplets_v2_lr_1e_8_m_5_e_3_pipeline_en_5.5.0_3.0_1725817134191.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_mpnet_base_newtriplets_v2_lr_1e_8_m_5_e_3_pipeline_en_5.5.0_3.0_1725817134191.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("all_mpnet_base_newtriplets_v2_lr_1e_8_m_5_e_3_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("all_mpnet_base_newtriplets_v2_lr_1e_8_m_5_e_3_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
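For quick checks on individual strings, `PretrainedPipeline` also exposes `annotate()` and `fullAnnotate()` in addition to `transform()`. The sketch below assumes the same pipeline name; the exact output keys depend on the stages bundled in the pipeline.

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("all_mpnet_base_newtriplets_v2_lr_1e_8_m_5_e_3_pipeline", lang="en")

# annotate() accepts a plain string (or list of strings) without building a DataFrame
result = pipeline.annotate("I love spark-nlp")
print(result.keys())  # lists the output columns produced by the bundled stages

# fullAnnotate() keeps complete annotation objects, including embedding vectors
full = pipeline.fullAnnotate("I love spark-nlp")
```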
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_mpnet_base_newtriplets_v2_lr_1e_8_m_5_e_3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-1e-8-m-5-e-3 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-all_mpnet_base_v2_bioasq_1epoch_batch32_100steps_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-all_mpnet_base_v2_bioasq_1epoch_batch32_100steps_pipeline_en.md new file mode 100644 index 00000000000000..c93c9171bca694 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-all_mpnet_base_v2_bioasq_1epoch_batch32_100steps_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English all_mpnet_base_v2_bioasq_1epoch_batch32_100steps_pipeline pipeline MPNetEmbeddings from juanpablomesa +author: John Snow Labs +name: all_mpnet_base_v2_bioasq_1epoch_batch32_100steps_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_mpnet_base_v2_bioasq_1epoch_batch32_100steps_pipeline` is a English model originally trained by juanpablomesa. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_bioasq_1epoch_batch32_100steps_pipeline_en_5.5.0_3.0_1725816874316.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_bioasq_1epoch_batch32_100steps_pipeline_en_5.5.0_3.0_1725816874316.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("all_mpnet_base_v2_bioasq_1epoch_batch32_100steps_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("all_mpnet_base_v2_bioasq_1epoch_batch32_100steps_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_mpnet_base_v2_bioasq_1epoch_batch32_100steps_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/juanpablomesa/all-mpnet-base-v2-bioasq-1epoch-batch32-100steps + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-all_mpnet_base_v2_sts_jilangdi_en.md b/docs/_posts/ahmedlone127/2024-09-08-all_mpnet_base_v2_sts_jilangdi_en.md new file mode 100644 index 00000000000000..897999a251775d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-all_mpnet_base_v2_sts_jilangdi_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English all_mpnet_base_v2_sts_jilangdi MPNetEmbeddings from jilangdi +author: John Snow Labs +name: all_mpnet_base_v2_sts_jilangdi +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_mpnet_base_v2_sts_jilangdi` is a English model originally trained by jilangdi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_sts_jilangdi_en_5.5.0_3.0_1725817108086.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_sts_jilangdi_en_5.5.0_3.0_1725817108086.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("all_mpnet_base_v2_sts_jilangdi","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("all_mpnet_base_v2_sts_jilangdi","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
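The `embeddings` column produced above contains one annotation per input document, with the sentence vector stored in its `embeddings` field. A short, assumed follow-up for extracting the raw vectors from `pipelineDF`:

```python
from pyspark.sql import functions as F

vectors = (
    pipelineDF
    .select(F.explode("embeddings").alias("ann"))     # one annotation per document
    .select(F.col("ann.embeddings").alias("vector"))  # the float vector itself
)

vectors.show(1, truncate=80)
print(len(vectors.first()["vector"]))  # should match the model's hidden size (768 for MPNet base)
```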
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_mpnet_base_v2_sts_jilangdi| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.7 MB| + +## References + +https://huggingface.co/jilangdi/all-mpnet-base-v2-sts \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-all_mpnet_base_v2_sts_jilangdi_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-all_mpnet_base_v2_sts_jilangdi_pipeline_en.md new file mode 100644 index 00000000000000..3750f00bf2759b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-all_mpnet_base_v2_sts_jilangdi_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English all_mpnet_base_v2_sts_jilangdi_pipeline pipeline MPNetEmbeddings from jilangdi +author: John Snow Labs +name: all_mpnet_base_v2_sts_jilangdi_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_mpnet_base_v2_sts_jilangdi_pipeline` is a English model originally trained by jilangdi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_sts_jilangdi_pipeline_en_5.5.0_3.0_1725817134568.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_sts_jilangdi_pipeline_en_5.5.0_3.0_1725817134568.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("all_mpnet_base_v2_sts_jilangdi_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("all_mpnet_base_v2_sts_jilangdi_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_mpnet_base_v2_sts_jilangdi_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.7 MB| + +## References + +https://huggingface.co/jilangdi/all-mpnet-base-v2-sts + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-autotrain_event_en.md b/docs/_posts/ahmedlone127/2024-09-08-autotrain_event_en.md new file mode 100644 index 00000000000000..3bc377222a973e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-autotrain_event_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English autotrain_event BertForSequenceClassification from Woao +author: John Snow Labs +name: autotrain_event +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`autotrain_event` is a English model originally trained by Woao. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/autotrain_event_en_5.5.0_3.0_1725825732574.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/autotrain_event_en_5.5.0_3.0_1725825732574.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, BertForSequenceClassification
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = BertForSequenceClassification.pretrained("autotrain_event","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = BertForSequenceClassification.pretrained("autotrain_event", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
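The fitted `pipelineModel` can be persisted like any Spark ML model, which avoids re-downloading the weights on every run. The path below is a hypothetical local directory; any URI supported by Spark works.

```python
from pyspark.ml import PipelineModel

model_path = "/tmp/autotrain_event_pipeline_model"  # hypothetical location

pipelineModel.write().overwrite().save(model_path)
restored = PipelineModel.load(model_path)
restored.transform(data).select("class.result").show(truncate=False)
```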
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|autotrain_event| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|383.0 MB| + +## References + +https://huggingface.co/Woao/autotrain-EVENT \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-babyberta_aochildes_2_5m_aochildes_french_without_masking_seed6_finetuned_squad_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-babyberta_aochildes_2_5m_aochildes_french_without_masking_seed6_finetuned_squad_pipeline_en.md new file mode 100644 index 00000000000000..4874386a3b900e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-babyberta_aochildes_2_5m_aochildes_french_without_masking_seed6_finetuned_squad_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English babyberta_aochildes_2_5m_aochildes_french_without_masking_seed6_finetuned_squad_pipeline pipeline RoBertaForQuestionAnswering from lielbin +author: John Snow Labs +name: babyberta_aochildes_2_5m_aochildes_french_without_masking_seed6_finetuned_squad_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`babyberta_aochildes_2_5m_aochildes_french_without_masking_seed6_finetuned_squad_pipeline` is a English model originally trained by lielbin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/babyberta_aochildes_2_5m_aochildes_french_without_masking_seed6_finetuned_squad_pipeline_en_5.5.0_3.0_1725833285849.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/babyberta_aochildes_2_5m_aochildes_french_without_masking_seed6_finetuned_squad_pipeline_en_5.5.0_3.0_1725833285849.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("babyberta_aochildes_2_5m_aochildes_french_without_masking_seed6_finetuned_squad_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("babyberta_aochildes_2_5m_aochildes_french_without_masking_seed6_finetuned_squad_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
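Question-answering pipelines take a question and a context rather than a single text column. As a sketch of the equivalent explicit pipeline, the snippet below uses the standard Spark NLP QA annotators; the standalone model name is assumed to be the pipeline name without the `_pipeline` suffix, and the question/context strings are illustrative.

```python
from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import RoBertaForQuestionAnswering
from pyspark.ml import Pipeline

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = RoBertaForQuestionAnswering.pretrained(
        "babyberta_aochildes_2_5m_aochildes_french_without_masking_seed6_finetuned_squad", "en") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame(
    [["What do I love?", "I love spark-nlp and use it every day."]]
).toDF("question", "context")

pipeline.fit(data).transform(data).select("answer.result").show(truncate=False)
```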
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|babyberta_aochildes_2_5m_aochildes_french_without_masking_seed6_finetuned_squad_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|32.0 MB| + +## References + +https://huggingface.co/lielbin/BabyBERTa-aochildes_2.5M_aochildes-french-without-Masking-seed6-finetuned-SQuAD + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-babyberta_wikipedia_french1_25m_wikipedia1_1_25mm_without_masking_finetuned_squad_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-babyberta_wikipedia_french1_25m_wikipedia1_1_25mm_without_masking_finetuned_squad_pipeline_en.md new file mode 100644 index 00000000000000..12bf3dfaa9a372 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-babyberta_wikipedia_french1_25m_wikipedia1_1_25mm_without_masking_finetuned_squad_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English babyberta_wikipedia_french1_25m_wikipedia1_1_25mm_without_masking_finetuned_squad_pipeline pipeline RoBertaForQuestionAnswering from lielbin +author: John Snow Labs +name: babyberta_wikipedia_french1_25m_wikipedia1_1_25mm_without_masking_finetuned_squad_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`babyberta_wikipedia_french1_25m_wikipedia1_1_25mm_without_masking_finetuned_squad_pipeline` is a English model originally trained by lielbin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/babyberta_wikipedia_french1_25m_wikipedia1_1_25mm_without_masking_finetuned_squad_pipeline_en_5.5.0_3.0_1725833632572.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/babyberta_wikipedia_french1_25m_wikipedia1_1_25mm_without_masking_finetuned_squad_pipeline_en_5.5.0_3.0_1725833632572.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("babyberta_wikipedia_french1_25m_wikipedia1_1_25mm_without_masking_finetuned_squad_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("babyberta_wikipedia_french1_25m_wikipedia1_1_25mm_without_masking_finetuned_squad_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|babyberta_wikipedia_french1_25m_wikipedia1_1_25mm_without_masking_finetuned_squad_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|31.9 MB| + +## References + +https://huggingface.co/lielbin/BabyBERTa-wikipedia_french1.25M_wikipedia1_1.25MM-without-Masking-finetuned-SQuAD + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-bcms_bertic_parlasent_bcs_ter_pipeline_hr.md b/docs/_posts/ahmedlone127/2024-09-08-bcms_bertic_parlasent_bcs_ter_pipeline_hr.md new file mode 100644 index 00000000000000..e24d9d570aa3ac --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-bcms_bertic_parlasent_bcs_ter_pipeline_hr.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Croatian bcms_bertic_parlasent_bcs_ter_pipeline pipeline BertForSequenceClassification from classla +author: John Snow Labs +name: bcms_bertic_parlasent_bcs_ter_pipeline +date: 2024-09-08 +tags: [hr, open_source, pipeline, onnx] +task: Text Classification +language: hr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bcms_bertic_parlasent_bcs_ter_pipeline` is a Croatian model originally trained by classla. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bcms_bertic_parlasent_bcs_ter_pipeline_hr_5.5.0_3.0_1725826074429.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bcms_bertic_parlasent_bcs_ter_pipeline_hr_5.5.0_3.0_1725826074429.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bcms_bertic_parlasent_bcs_ter_pipeline", lang = "hr") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bcms_bertic_parlasent_bcs_ter_pipeline", lang = "hr") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bcms_bertic_parlasent_bcs_ter_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|hr| +|Size:|414.9 MB| + +## References + +https://huggingface.co/classla/bcms-bertic-parlasent-bcs-ter + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-beep_kanuri_medium_hate_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-beep_kanuri_medium_hate_pipeline_en.md new file mode 100644 index 00000000000000..73e4781c7a8c1d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-beep_kanuri_medium_hate_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English beep_kanuri_medium_hate_pipeline pipeline BertForSequenceClassification from beomi +author: John Snow Labs +name: beep_kanuri_medium_hate_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`beep_kanuri_medium_hate_pipeline` is a English model originally trained by beomi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/beep_kanuri_medium_hate_pipeline_en_5.5.0_3.0_1725819630330.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/beep_kanuri_medium_hate_pipeline_en_5.5.0_3.0_1725819630330.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("beep_kanuri_medium_hate_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("beep_kanuri_medium_hate_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|beep_kanuri_medium_hate_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|380.3 MB| + +## References + +https://huggingface.co/beomi/beep-KR-Medium-hate + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-bert_base_uncased_twitter_sentiment_analysis_v2_en.md b/docs/_posts/ahmedlone127/2024-09-08-bert_base_uncased_twitter_sentiment_analysis_v2_en.md new file mode 100644 index 00000000000000..01c75f568f51b2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-bert_base_uncased_twitter_sentiment_analysis_v2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English bert_base_uncased_twitter_sentiment_analysis_v2 BertForSequenceClassification from DunnBC22 +author: John Snow Labs +name: bert_base_uncased_twitter_sentiment_analysis_v2 +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_base_uncased_twitter_sentiment_analysis_v2` is a English model originally trained by DunnBC22. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_uncased_twitter_sentiment_analysis_v2_en_5.5.0_3.0_1725825451058.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_uncased_twitter_sentiment_analysis_v2_en_5.5.0_3.0_1725825451058.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, BertForSequenceClassification
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = BertForSequenceClassification.pretrained("bert_base_uncased_twitter_sentiment_analysis_v2","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = BertForSequenceClassification.pretrained("bert_base_uncased_twitter_sentiment_analysis_v2", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
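For throughput tuning, the sequence classifier exposes batch size, maximum sentence length, and case-sensitivity setters that can be applied before adding it to the pipeline. The values below are illustrative, not recommendations.

```python
sequenceClassifier = BertForSequenceClassification.pretrained(
        "bert_base_uncased_twitter_sentiment_analysis_v2", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class") \
    .setBatchSize(8) \
    .setMaxSentenceLength(128) \
    .setCaseSensitive(False)
```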
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_uncased_twitter_sentiment_analysis_v2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.4 MB| + +## References + +https://huggingface.co/DunnBC22/bert-base-uncased-Twitter_Sentiment_Analysis_v2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-bert_base_uncased_twitter_sentiment_analysis_v2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-bert_base_uncased_twitter_sentiment_analysis_v2_pipeline_en.md new file mode 100644 index 00000000000000..9ef0ebaef83e6b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-bert_base_uncased_twitter_sentiment_analysis_v2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bert_base_uncased_twitter_sentiment_analysis_v2_pipeline pipeline BertForSequenceClassification from DunnBC22 +author: John Snow Labs +name: bert_base_uncased_twitter_sentiment_analysis_v2_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_base_uncased_twitter_sentiment_analysis_v2_pipeline` is a English model originally trained by DunnBC22. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_uncased_twitter_sentiment_analysis_v2_pipeline_en_5.5.0_3.0_1725825470615.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_uncased_twitter_sentiment_analysis_v2_pipeline_en_5.5.0_3.0_1725825470615.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_base_uncased_twitter_sentiment_analysis_v2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_base_uncased_twitter_sentiment_analysis_v2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_uncased_twitter_sentiment_analysis_v2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|409.4 MB| + +## References + +https://huggingface.co/DunnBC22/bert-base-uncased-Twitter_Sentiment_Analysis_v2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-bert_cancer_type_extraction2_en.md b/docs/_posts/ahmedlone127/2024-09-08-bert_cancer_type_extraction2_en.md new file mode 100644 index 00000000000000..b37fae82df22cb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-bert_cancer_type_extraction2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English bert_cancer_type_extraction2 DistilBertForTokenClassification from DrM +author: John Snow Labs +name: bert_cancer_type_extraction2 +date: 2024-09-08 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_cancer_type_extraction2` is a English model originally trained by DrM. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_cancer_type_extraction2_en_5.5.0_3.0_1725837190313.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_cancer_type_extraction2_en_5.5.0_3.0_1725837190313.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DistilBertForTokenClassification
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = DistilBertForTokenClassification.pretrained("bert_cancer_type_extraction2","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = DistilBertForTokenClassification.pretrained("bert_cancer_type_extraction2", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
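The `ner` column above holds per-token tags. To group them into entity chunks, a `NerConverter` stage is commonly appended to the same pipeline; the sketch below reuses the stages defined above and assumes IOB-style tags.

```python
from sparknlp.annotator import NerConverter

converter = NerConverter() \
    .setInputCols(["document", "token", "ner"]) \
    .setOutputCol("ner_chunk")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier, converter])
pipeline.fit(data).transform(data).select("ner_chunk.result").show(truncate=False)
```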
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_cancer_type_extraction2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|505.4 MB| + +## References + +https://huggingface.co/DrM/BERT_Cancer_type_extraction2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-bert_classifier_sead_l_6_h_256_a_8_sst2_en.md b/docs/_posts/ahmedlone127/2024-09-08-bert_classifier_sead_l_6_h_256_a_8_sst2_en.md new file mode 100644 index 00000000000000..6f1b5a1af55090 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-bert_classifier_sead_l_6_h_256_a_8_sst2_en.md @@ -0,0 +1,111 @@ +--- +layout: model +title: English BertForSequenceClassification Cased model (from course5i) +author: John Snow Labs +name: bert_classifier_sead_l_6_h_256_a_8_sst2 +date: 2024-09-08 +tags: [en, open_source, bert, sequence_classification, classification, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `SEAD-L-6_H-256_A-8-sst2` is a English model originally trained by `course5i`. + +## Predicted Entities + +`0`, `1` + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_classifier_sead_l_6_h_256_a_8_sst2_en_5.5.0_3.0_1725801662716.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_classifier_sead_l_6_h_256_a_8_sst2_en_5.5.0_3.0_1725801662716.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +seq_classifier = BertForSequenceClassification.pretrained("bert_classifier_sead_l_6_h_256_a_8_sst2","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("class") + +pipeline = Pipeline(stages=[documentAssembler, tokenizer, seq_classifier]) + +data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text") + +result = pipeline.fit(data).transform(data) +``` +```scala +val documentAssembler = new DocumentAssembler() + .setInputCols(Array("text")) + .setOutputCols(Array("document")) + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val seq_classifier = BertForSequenceClassification.pretrained("bert_classifier_sead_l_6_h_256_a_8_sst2","en") + .setInputCols(Array("document", "token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, seq_classifier)) + +val data = Seq("PUT YOUR STRING HERE").toDS.toDF("text") + +val result = pipeline.fit(data).transform(data) +``` + +{:.nlu-block} +```python +import nlu +nlu.load("en.classify.bert.glue_sst2.6l_256d_a8a_256d.by_course5i").predict("""PUT YOUR STRING HERE""") +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_classifier_sead_l_6_h_256_a_8_sst2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|47.3 MB| + +## References + +References + +- https://huggingface.co/course5i/SEAD-L-6_H-256_A-8-sst2 +- https://arxiv.org/abs/1910.01108 +- https://arxiv.org/abs/1909.10351 +- https://arxiv.org/abs/2002.10957 +- https://arxiv.org/abs/1810.04805 +- https://arxiv.org/abs/1804.07461 +- https://arxiv.org/abs/1905.00537 +- https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-bert_finetuned_ner_t1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-bert_finetuned_ner_t1_pipeline_en.md new file mode 100644 index 00000000000000..6200731c9efb6d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-bert_finetuned_ner_t1_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bert_finetuned_ner_t1_pipeline pipeline DistilBertForTokenClassification from avi10 +author: John Snow Labs +name: bert_finetuned_ner_t1_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_finetuned_ner_t1_pipeline` is a English model originally trained by avi10. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_finetuned_ner_t1_pipeline_en_5.5.0_3.0_1725837292826.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_finetuned_ner_t1_pipeline_en_5.5.0_3.0_1725837292826.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_finetuned_ner_t1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_finetuned_ner_t1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_finetuned_ner_t1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/avi10/bert-finetuned-ner-T1 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-bert_french_ner_datascience_service_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-bert_french_ner_datascience_service_pipeline_en.md new file mode 100644 index 00000000000000..0cb169501d8de4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-bert_french_ner_datascience_service_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bert_french_ner_datascience_service_pipeline pipeline BertForTokenClassification from datascience-service +author: John Snow Labs +name: bert_french_ner_datascience_service_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_french_ner_datascience_service_pipeline` is a English model originally trained by datascience-service. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_french_ner_datascience_service_pipeline_en_5.5.0_3.0_1725834923011.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_french_ner_datascience_service_pipeline_en_5.5.0_3.0_1725834923011.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_french_ner_datascience_service_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_french_ner_datascience_service_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_french_ner_datascience_service_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|665.1 MB| + +## References + +https://huggingface.co/datascience-service/bert-french-ner + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-bert_phishing_classifier_teacher_en.md b/docs/_posts/ahmedlone127/2024-09-08-bert_phishing_classifier_teacher_en.md new file mode 100644 index 00000000000000..22863f91ee4fb3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-bert_phishing_classifier_teacher_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English bert_phishing_classifier_teacher BertForSequenceClassification from shawhin +author: John Snow Labs +name: bert_phishing_classifier_teacher +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_phishing_classifier_teacher` is a English model originally trained by shawhin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_phishing_classifier_teacher_en_5.5.0_3.0_1725826192745.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_phishing_classifier_teacher_en_5.5.0_3.0_1725826192745.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, BertForSequenceClassification
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = BertForSequenceClassification.pretrained("bert_phishing_classifier_teacher","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = BertForSequenceClassification.pretrained("bert_phishing_classifier_teacher", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
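For low-latency scoring of single texts (for example, checking one message at a time), the fitted model can be wrapped in a `LightPipeline`, which avoids DataFrame overhead. The input string below is illustrative only.

```python
from sparknlp.base import LightPipeline

light = LightPipeline(pipelineModel)
# Returns a dict keyed by output column names; "class" holds the predicted label
print(light.annotate("Your account has been locked, click here to verify your password")["class"])
```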
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_phishing_classifier_teacher| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.4 MB| + +## References + +https://huggingface.co/shawhin/bert-phishing-classifier_teacher \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-bert_phishing_classifier_teacher_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-bert_phishing_classifier_teacher_pipeline_en.md new file mode 100644 index 00000000000000..ae555524b97820 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-bert_phishing_classifier_teacher_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bert_phishing_classifier_teacher_pipeline pipeline BertForSequenceClassification from shawhin +author: John Snow Labs +name: bert_phishing_classifier_teacher_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_phishing_classifier_teacher_pipeline` is a English model originally trained by shawhin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_phishing_classifier_teacher_pipeline_en_5.5.0_3.0_1725826212880.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_phishing_classifier_teacher_pipeline_en_5.5.0_3.0_1725826212880.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_phishing_classifier_teacher_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_phishing_classifier_teacher_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_phishing_classifier_teacher_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|409.4 MB| + +## References + +https://huggingface.co/shawhin/bert-phishing-classifier_teacher + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-bert_sentiment_analysis_kwang123_en.md b/docs/_posts/ahmedlone127/2024-09-08-bert_sentiment_analysis_kwang123_en.md new file mode 100644 index 00000000000000..d86ce7009dd5e7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-bert_sentiment_analysis_kwang123_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English bert_sentiment_analysis_kwang123 BertForSequenceClassification from kwang123 +author: John Snow Labs +name: bert_sentiment_analysis_kwang123 +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_sentiment_analysis_kwang123` is a English model originally trained by kwang123. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_sentiment_analysis_kwang123_en_5.5.0_3.0_1725761539251.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_sentiment_analysis_kwang123_en_5.5.0_3.0_1725761539251.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, BertForSequenceClassification
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = BertForSequenceClassification.pretrained("bert_sentiment_analysis_kwang123","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = BertForSequenceClassification.pretrained("bert_sentiment_analysis_kwang123", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_sentiment_analysis_kwang123| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.4 MB| + +## References + +https://huggingface.co/kwang123/bert-sentiment-analysis \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-bert_southern_sotho_qa_multi_qa_mpnet_base_dot_v1_game_183_en.md b/docs/_posts/ahmedlone127/2024-09-08-bert_southern_sotho_qa_multi_qa_mpnet_base_dot_v1_game_183_en.md new file mode 100644 index 00000000000000..c992ccc9df734f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-bert_southern_sotho_qa_multi_qa_mpnet_base_dot_v1_game_183_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English bert_southern_sotho_qa_multi_qa_mpnet_base_dot_v1_game_183 MPNetEmbeddings from abhijitt +author: John Snow Labs +name: bert_southern_sotho_qa_multi_qa_mpnet_base_dot_v1_game_183 +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_southern_sotho_qa_multi_qa_mpnet_base_dot_v1_game_183` is a English model originally trained by abhijitt. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_southern_sotho_qa_multi_qa_mpnet_base_dot_v1_game_183_en_5.5.0_3.0_1725817505279.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_southern_sotho_qa_multi_qa_mpnet_base_dot_v1_game_183_en_5.5.0_3.0_1725817505279.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("bert_southern_sotho_qa_multi_qa_mpnet_base_dot_v1_game_183","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("bert_southern_sotho_qa_multi_qa_mpnet_base_dot_v1_game_183","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
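A common downstream use of these sentence vectors is semantic similarity. The sketch below embeds two illustrative sentences with the fitted `pipelineModel` from above and compares them with cosine similarity; NumPy is used only for the arithmetic, and row ordering is assumed to follow the input order.

```python
import numpy as np

texts = spark.createDataFrame(
    [["How do I reset my password?"], ["Steps to change a forgotten password"]]
).toDF("text")

rows = pipelineModel.transform(texts).select("embeddings.embeddings").collect()
v1 = np.array(rows[0][0][0])  # first document, first (only) annotation
v2 = np.array(rows[1][0][0])

cosine = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
print(f"cosine similarity: {cosine:.3f}")
```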
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_southern_sotho_qa_multi_qa_mpnet_base_dot_v1_game_183| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.0 MB| + +## References + +https://huggingface.co/abhijitt/bert_st_qa_multi-qa-mpnet-base-dot-v1_game_183 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-bertin_roberta_base_spanish_spanish_suicide_intent_information_v2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-bertin_roberta_base_spanish_spanish_suicide_intent_information_v2_pipeline_en.md new file mode 100644 index 00000000000000..28855a57e77f15 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-bertin_roberta_base_spanish_spanish_suicide_intent_information_v2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bertin_roberta_base_spanish_spanish_suicide_intent_information_v2_pipeline pipeline RoBertaForSequenceClassification from PrevenIA +author: John Snow Labs +name: bertin_roberta_base_spanish_spanish_suicide_intent_information_v2_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bertin_roberta_base_spanish_spanish_suicide_intent_information_v2_pipeline` is a English model originally trained by PrevenIA. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bertin_roberta_base_spanish_spanish_suicide_intent_information_v2_pipeline_en_5.5.0_3.0_1725830884551.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bertin_roberta_base_spanish_spanish_suicide_intent_information_v2_pipeline_en_5.5.0_3.0_1725830884551.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bertin_roberta_base_spanish_spanish_suicide_intent_information_v2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bertin_roberta_base_spanish_spanish_suicide_intent_information_v2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
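+
+The `df` passed to `transform` above is any DataFrame with a `text` column. A minimal sketch for building it and reading the predictions back, assuming a session started with `sparknlp.start()` and that the classifier stage of this pipeline writes to a column named `class` (an assumption, not stated in the card):
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+from pyspark.sql import functions as F
+
+spark = sparknlp.start()
+
+# Input DataFrame with the raw text to classify
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+
+pipeline = PretrainedPipeline("bertin_roberta_base_spanish_spanish_suicide_intent_information_v2_pipeline", lang = "en")
+annotations = pipeline.transform(df)
+
+# "class" is assumed to be the output column of the classifier stage
+annotations.select("text", F.explode(F.col("class.result")).alias("predicted_label")).show(truncate = False)
+```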
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bertin_roberta_base_spanish_spanish_suicide_intent_information_v2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|464.5 MB| + +## References + +https://huggingface.co/PrevenIA/bertin-roberta-base-spanish-spanish-suicide-intent-information-v2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-bertin_roberta_fine_tuned_text_classification_slovene_data_augmentation_ds_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-bertin_roberta_fine_tuned_text_classification_slovene_data_augmentation_ds_pipeline_en.md new file mode 100644 index 00000000000000..59fe8e788c0072 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-bertin_roberta_fine_tuned_text_classification_slovene_data_augmentation_ds_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bertin_roberta_fine_tuned_text_classification_slovene_data_augmentation_ds_pipeline pipeline RoBertaForSequenceClassification from Sleoruiz +author: John Snow Labs +name: bertin_roberta_fine_tuned_text_classification_slovene_data_augmentation_ds_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bertin_roberta_fine_tuned_text_classification_slovene_data_augmentation_ds_pipeline` is a English model originally trained by Sleoruiz. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bertin_roberta_fine_tuned_text_classification_slovene_data_augmentation_ds_pipeline_en_5.5.0_3.0_1725778742173.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bertin_roberta_fine_tuned_text_classification_slovene_data_augmentation_ds_pipeline_en_5.5.0_3.0_1725778742173.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bertin_roberta_fine_tuned_text_classification_slovene_data_augmentation_ds_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bertin_roberta_fine_tuned_text_classification_slovene_data_augmentation_ds_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bertin_roberta_fine_tuned_text_classification_slovene_data_augmentation_ds_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|464.7 MB| + +## References + +https://huggingface.co/Sleoruiz/bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-ds + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-biolinkbert_base_michiyasunaga_en.md b/docs/_posts/ahmedlone127/2024-09-08-biolinkbert_base_michiyasunaga_en.md new file mode 100644 index 00000000000000..6a78fb5cec3c58 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-biolinkbert_base_michiyasunaga_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English biolinkbert_base_michiyasunaga BertForSequenceClassification from michiyasunaga +author: John Snow Labs +name: biolinkbert_base_michiyasunaga +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`biolinkbert_base_michiyasunaga` is a English model originally trained by michiyasunaga. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/biolinkbert_base_michiyasunaga_en_5.5.0_3.0_1725761226797.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/biolinkbert_base_michiyasunaga_en_5.5.0_3.0_1725761226797.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = BertForSequenceClassification.pretrained("biolinkbert_base_michiyasunaga","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = BertForSequenceClassification.pretrained("biolinkbert_base_michiyasunaga", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|biolinkbert_base_michiyasunaga| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|405.7 MB| + +## References + +https://huggingface.co/michiyasunaga/BioLinkBERT-base \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-bleurt_base_128_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-bleurt_base_128_pipeline_en.md new file mode 100644 index 00000000000000..75bfb435aa5e7a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-bleurt_base_128_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bleurt_base_128_pipeline pipeline BertForSequenceClassification from Elron +author: John Snow Labs +name: bleurt_base_128_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bleurt_base_128_pipeline` is a English model originally trained by Elron. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bleurt_base_128_pipeline_en_5.5.0_3.0_1725768152215.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bleurt_base_128_pipeline_en_5.5.0_3.0_1725768152215.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bleurt_base_128_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bleurt_base_128_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bleurt_base_128_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|409.5 MB| + +## References + +https://huggingface.co/Elron/bleurt-base-128 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_qa_model_nkey1_en.md b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_qa_model_nkey1_en.md new file mode 100644 index 00000000000000..f5c52ee65cd9ae --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_qa_model_nkey1_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_awesome_qa_model_nkey1 DistilBertForQuestionAnswering from nkey1 +author: John Snow Labs +name: burmese_awesome_qa_model_nkey1 +date: 2024-09-08 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_nkey1` is a English model originally trained by nkey1. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_nkey1_en_5.5.0_3.0_1725823198349.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_nkey1_en_5.5.0_3.0_1725823198349.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+     .setInputCols(["question", "context"]) \
+     .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_nkey1","en") \
+     .setInputCols(["document_question","document_context"]) \
+     .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+     .setInputCols(Array("question", "context"))
+     .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_nkey1", "en")
+     .setInputCols(Array("document_question","document_context"))
+     .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
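+
+A short follow-up sketch for reading the predicted answers out of the `pipelineDF` built in the Python example above (the `answer` column holds one annotation per question, with the text in its `result` field):
+
+```python
+from pyspark.sql import functions as F
+
+# Show each question next to the span predicted by the model
+pipelineDF.select(
+    F.col("document_question.result").alias("question"),
+    F.explode(F.col("answer.result")).alias("predicted_answer")
+).show(truncate = False)
+```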
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_nkey1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/nkey1/my_awesome_qa_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_qa_model_zhandsome_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_qa_model_zhandsome_pipeline_en.md new file mode 100644 index 00000000000000..101c1bf4df6d78 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_qa_model_zhandsome_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English burmese_awesome_qa_model_zhandsome_pipeline pipeline DistilBertForQuestionAnswering from zhandsome +author: John Snow Labs +name: burmese_awesome_qa_model_zhandsome_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_zhandsome_pipeline` is a English model originally trained by zhandsome. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_zhandsome_pipeline_en_5.5.0_3.0_1725823516461.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_zhandsome_pipeline_en_5.5.0_3.0_1725823516461.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_qa_model_zhandsome_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_qa_model_zhandsome_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_zhandsome_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/zhandsome/my_awesome_qa_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_setfit_model_2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_setfit_model_2_pipeline_en.md new file mode 100644 index 00000000000000..63433a16a9f701 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_setfit_model_2_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English burmese_awesome_setfit_model_2_pipeline pipeline MPNetEmbeddings from lewtun +author: John Snow Labs +name: burmese_awesome_setfit_model_2_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_setfit_model_2_pipeline` is a English model originally trained by lewtun. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_setfit_model_2_pipeline_en_5.5.0_3.0_1725769212557.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_setfit_model_2_pipeline_en_5.5.0_3.0_1725769212557.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_setfit_model_2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_setfit_model_2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_setfit_model_2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/lewtun/my-awesome-setfit-model-2 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_all_jaoa_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_all_jaoa_pipeline_en.md new file mode 100644 index 00000000000000..42332a28156127 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_all_jaoa_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_awesome_wnut_all_jaoa_pipeline pipeline DistilBertForTokenClassification from gonzalezrostani +author: John Snow Labs +name: burmese_awesome_wnut_all_jaoa_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_wnut_all_jaoa_pipeline` is a English model originally trained by gonzalezrostani. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_all_jaoa_pipeline_en_5.5.0_3.0_1725789122829.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_all_jaoa_pipeline_en_5.5.0_3.0_1725789122829.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_wnut_all_jaoa_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_wnut_all_jaoa_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_wnut_all_jaoa_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/gonzalezrostani/my_awesome_wnut_all_JAOa + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_jaoa_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_jaoa_pipeline_en.md new file mode 100644 index 00000000000000..b081a41d2c564c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_jaoa_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_awesome_wnut_jaoa_pipeline pipeline DistilBertForTokenClassification from gonzalezrostani +author: John Snow Labs +name: burmese_awesome_wnut_jaoa_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_wnut_jaoa_pipeline` is a English model originally trained by gonzalezrostani. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_jaoa_pipeline_en_5.5.0_3.0_1725837616133.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_jaoa_pipeline_en_5.5.0_3.0_1725837616133.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_wnut_jaoa_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_wnut_jaoa_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_wnut_jaoa_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/gonzalezrostani/my_awesome_wnut_JAOa + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_jquanti_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_jquanti_pipeline_en.md new file mode 100644 index 00000000000000..65efc781a49899 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_jquanti_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_awesome_wnut_jquanti_pipeline pipeline DistilBertForTokenClassification from gonzalezrostani +author: John Snow Labs +name: burmese_awesome_wnut_jquanti_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_wnut_jquanti_pipeline` is a English model originally trained by gonzalezrostani. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_jquanti_pipeline_en_5.5.0_3.0_1725828006007.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_jquanti_pipeline_en_5.5.0_3.0_1725828006007.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_wnut_jquanti_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_wnut_jquanti_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_wnut_jquanti_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/gonzalezrostani/my_awesome_wnut_JQuanti + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_model_alexandryte6_en.md b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_model_alexandryte6_en.md new file mode 100644 index 00000000000000..2a5e37f1c5bf50 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_model_alexandryte6_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English burmese_awesome_wnut_model_alexandryte6 DistilBertForTokenClassification from alexandryte6 +author: John Snow Labs +name: burmese_awesome_wnut_model_alexandryte6 +date: 2024-09-08 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_wnut_model_alexandryte6` is a English model originally trained by alexandryte6. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_alexandryte6_en_5.5.0_3.0_1725788859680.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_alexandryte6_en_5.5.0_3.0_1725788859680.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_awesome_wnut_model_alexandryte6","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_awesome_wnut_model_alexandryte6", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
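+
+To inspect the predictions from the Python example above, note that the `token` and `ner` columns of `pipelineDF` hold position-aligned arrays, so a quick sketch is simply:
+
+```python
+# Entry i of ner.result is the tag predicted for entry i of token.result
+pipelineDF.select("token.result", "ner.result").show(truncate = False)
+```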
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_wnut_model_alexandryte6| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/alexandryte6/my_awesome_wnut_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_model_massiaz_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_model_massiaz_pipeline_en.md new file mode 100644 index 00000000000000..1d9b60d8de5f02 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_model_massiaz_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_awesome_wnut_model_massiaz_pipeline pipeline DistilBertForTokenClassification from massiaz +author: John Snow Labs +name: burmese_awesome_wnut_model_massiaz_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_wnut_model_massiaz_pipeline` is a English model originally trained by massiaz. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_massiaz_pipeline_en_5.5.0_3.0_1725788542977.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_massiaz_pipeline_en_5.5.0_3.0_1725788542977.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_wnut_model_massiaz_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_wnut_model_massiaz_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_wnut_model_massiaz_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/massiaz/my_awesome_wnut_model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_model_saidileep1007_en.md b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_model_saidileep1007_en.md new file mode 100644 index 00000000000000..718290f7a5848a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-burmese_awesome_wnut_model_saidileep1007_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English burmese_awesome_wnut_model_saidileep1007 DistilBertForTokenClassification from saidileep1007 +author: John Snow Labs +name: burmese_awesome_wnut_model_saidileep1007 +date: 2024-09-08 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_wnut_model_saidileep1007` is a English model originally trained by saidileep1007. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_saidileep1007_en_5.5.0_3.0_1725788560299.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_wnut_model_saidileep1007_en_5.5.0_3.0_1725788560299.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_awesome_wnut_model_saidileep1007","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_awesome_wnut_model_saidileep1007", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_wnut_model_saidileep1007| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/saidileep1007/my_awesome_wnut_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-burmese_ner_model_hcy5561_en.md b/docs/_posts/ahmedlone127/2024-09-08-burmese_ner_model_hcy5561_en.md new file mode 100644 index 00000000000000..8870f47937e254 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-burmese_ner_model_hcy5561_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English burmese_ner_model_hcy5561 DistilBertForTokenClassification from hcy5561 +author: John Snow Labs +name: burmese_ner_model_hcy5561 +date: 2024-09-08 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_ner_model_hcy5561` is a English model originally trained by hcy5561. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_ner_model_hcy5561_en_5.5.0_3.0_1725788650814.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_ner_model_hcy5561_en_5.5.0_3.0_1725788650814.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_ner_model_hcy5561","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("burmese_ner_model_hcy5561", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_ner_model_hcy5561| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/hcy5561/my_ner_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-burmese_ner_model_hcy5561_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-burmese_ner_model_hcy5561_pipeline_en.md new file mode 100644 index 00000000000000..f49302aa1b48e6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-burmese_ner_model_hcy5561_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_ner_model_hcy5561_pipeline pipeline DistilBertForTokenClassification from hcy5561 +author: John Snow Labs +name: burmese_ner_model_hcy5561_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_ner_model_hcy5561_pipeline` is a English model originally trained by hcy5561. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_ner_model_hcy5561_pipeline_en_5.5.0_3.0_1725788662314.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_ner_model_hcy5561_pipeline_en_5.5.0_3.0_1725788662314.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_ner_model_hcy5561_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_ner_model_hcy5561_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_ner_model_hcy5561_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/hcy5561/my_ner_model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-burmese_nmt_model2_ad_iiitd_en.md b/docs/_posts/ahmedlone127/2024-09-08-burmese_nmt_model2_ad_iiitd_en.md new file mode 100644 index 00000000000000..0d7e158734a6c5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-burmese_nmt_model2_ad_iiitd_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English burmese_nmt_model2_ad_iiitd MarianTransformer from AD-IIITD +author: John Snow Labs +name: burmese_nmt_model2_ad_iiitd +date: 2024-09-08 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_nmt_model2_ad_iiitd` is a English model originally trained by AD-IIITD. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_nmt_model2_ad_iiitd_en_5.5.0_3.0_1725765319241.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_nmt_model2_ad_iiitd_en_5.5.0_3.0_1725765319241.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("burmese_nmt_model2_ad_iiitd","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("burmese_nmt_model2_ad_iiitd","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
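+
+Once the pipeline has run, the translated sentences sit in the `translation` column of `pipelineDF`; a minimal sketch for reading them back:
+
+```python
+from pyspark.sql import functions as F
+
+# One translated string per sentence produced by the sentence detector
+pipelineDF.select(F.explode(F.col("translation.result")).alias("translated_text")).show(truncate = False)
+```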
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_nmt_model2_ad_iiitd| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|500.3 MB| + +## References + +https://huggingface.co/AD-IIITD/my_NMT_model2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-clas_3_en.md b/docs/_posts/ahmedlone127/2024-09-08-clas_3_en.md new file mode 100644 index 00000000000000..33204c2dc907a2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-clas_3_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English clas_3 RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: clas_3 +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`clas_3` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/clas_3_en_5.5.0_3.0_1725821149879.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/clas_3_en_5.5.0_3.0_1725821149879.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("clas_3","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("clas_3", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|clas_3| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Clas_3 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-clasificador_languagedetection_rociourquijo_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-clasificador_languagedetection_rociourquijo_pipeline_en.md new file mode 100644 index 00000000000000..96366026b6303e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-clasificador_languagedetection_rociourquijo_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English clasificador_languagedetection_rociourquijo_pipeline pipeline XlmRoBertaForSequenceClassification from RocioUrquijo +author: John Snow Labs +name: clasificador_languagedetection_rociourquijo_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`clasificador_languagedetection_rociourquijo_pipeline` is a English model originally trained by RocioUrquijo. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/clasificador_languagedetection_rociourquijo_pipeline_en_5.5.0_3.0_1725799340243.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/clasificador_languagedetection_rociourquijo_pipeline_en_5.5.0_3.0_1725799340243.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("clasificador_languagedetection_rociourquijo_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("clasificador_languagedetection_rociourquijo_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|clasificador_languagedetection_rociourquijo_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|870.4 MB| + +## References + +https://huggingface.co/RocioUrquijo/clasificador-languagedetection + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-clasificadorcorreosoportedistilespanol_en.md b/docs/_posts/ahmedlone127/2024-09-08-clasificadorcorreosoportedistilespanol_en.md new file mode 100644 index 00000000000000..0add0ccdf07f7f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-clasificadorcorreosoportedistilespanol_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English clasificadorcorreosoportedistilespanol DistilBertForSequenceClassification from Arodrigo +author: John Snow Labs +name: clasificadorcorreosoportedistilespanol +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`clasificadorcorreosoportedistilespanol` is a English model originally trained by Arodrigo. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/clasificadorcorreosoportedistilespanol_en_5.5.0_3.0_1725776960016.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/clasificadorcorreosoportedistilespanol_en_5.5.0_3.0_1725776960016.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DistilBertForSequenceClassification.pretrained("clasificadorcorreosoportedistilespanol","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DistilBertForSequenceClassification.pretrained("clasificadorcorreosoportedistilespanol", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|clasificadorcorreosoportedistilespanol| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|252.4 MB| + +## References + +https://huggingface.co/Arodrigo/ClasificadorCorreoSoporteDistilEspanol \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-classifier_chapter4_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-classifier_chapter4_pipeline_en.md new file mode 100644 index 00000000000000..61b1e95642d7fb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-classifier_chapter4_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English classifier_chapter4_pipeline pipeline DistilBertForSequenceClassification from genaibook +author: John Snow Labs +name: classifier_chapter4_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`classifier_chapter4_pipeline` is a English model originally trained by genaibook. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/classifier_chapter4_pipeline_en_5.5.0_3.0_1725809078294.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/classifier_chapter4_pipeline_en_5.5.0_3.0_1725809078294.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("classifier_chapter4_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("classifier_chapter4_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|classifier_chapter4_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/genaibook/classifier-chapter4 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-cls_model_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-cls_model_pipeline_en.md new file mode 100644 index 00000000000000..d89661bb3d1cc9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-cls_model_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English cls_model_pipeline pipeline MPNetEmbeddings from maneprajakta +author: John Snow Labs +name: cls_model_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cls_model_pipeline` is a English model originally trained by maneprajakta. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cls_model_pipeline_en_5.5.0_3.0_1725816863462.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cls_model_pipeline_en_5.5.0_3.0_1725816863462.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("cls_model_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("cls_model_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cls_model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/maneprajakta/cls_model + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-cnec2_0_supertypes_distilbert_en.md b/docs/_posts/ahmedlone127/2024-09-08-cnec2_0_supertypes_distilbert_en.md new file mode 100644 index 00000000000000..298d8b58e6cfb8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-cnec2_0_supertypes_distilbert_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English cnec2_0_supertypes_distilbert DistilBertForTokenClassification from stulcrad +author: John Snow Labs +name: cnec2_0_supertypes_distilbert +date: 2024-09-08 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cnec2_0_supertypes_distilbert` is a English model originally trained by stulcrad. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cnec2_0_supertypes_distilbert_en_5.5.0_3.0_1725837289143.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cnec2_0_supertypes_distilbert_en_5.5.0_3.0_1725837289143.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("cnec2_0_supertypes_distilbert","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("cnec2_0_supertypes_distilbert", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cnec2_0_supertypes_distilbert| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|505.5 MB| + +## References + +https://huggingface.co/stulcrad/CNEC2_0_Supertypes_distilbert \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-cot_5_en.md b/docs/_posts/ahmedlone127/2024-09-08-cot_5_en.md new file mode 100644 index 00000000000000..3982faf5c7ce66 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-cot_5_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English cot_5 MPNetEmbeddings from ingeol +author: John Snow Labs +name: cot_5 +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cot_5` is a English model originally trained by ingeol. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cot_5_en_5.5.0_3.0_1725815800571.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cot_5_en_5.5.0_3.0_1725815800571.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("cot_5","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("cot_5","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
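
The sentence vector produced by the example above lives inside the `embeddings` column it defines; a short sketch of pulling it out of the single example row as a plain Python list:

```python
# each document-level annotation stores the vector in its nested `embeddings` field
vec = pipelineDF.selectExpr("explode(embeddings) as doc") \
                .selectExpr("doc.embeddings as vector") \
                .first()["vector"]
print(len(vec), vec[:5])
```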
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cot_5| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/ingeol/cot_5 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-cot_5_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-cot_5_pipeline_en.md new file mode 100644 index 00000000000000..e552955891721c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-cot_5_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English cot_5_pipeline pipeline MPNetEmbeddings from ingeol +author: John Snow Labs +name: cot_5_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cot_5_pipeline` is a English model originally trained by ingeol. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cot_5_pipeline_en_5.5.0_3.0_1725815823913.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cot_5_pipeline_en_5.5.0_3.0_1725815823913.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("cot_5_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("cot_5_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cot_5_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/ingeol/cot_5 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-covid_tweet_sentiment_analysis_roberta_model_en.md b/docs/_posts/ahmedlone127/2024-09-08-covid_tweet_sentiment_analysis_roberta_model_en.md new file mode 100644 index 00000000000000..c0b38d443b8458 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-covid_tweet_sentiment_analysis_roberta_model_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English covid_tweet_sentiment_analysis_roberta_model RoBertaForSequenceClassification from Eva-Gaga +author: John Snow Labs +name: covid_tweet_sentiment_analysis_roberta_model +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`covid_tweet_sentiment_analysis_roberta_model` is a English model originally trained by Eva-Gaga. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/covid_tweet_sentiment_analysis_roberta_model_en_5.5.0_3.0_1725820904790.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/covid_tweet_sentiment_analysis_roberta_model_en_5.5.0_3.0_1725820904790.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, RoBertaForSequenceClassification
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

# the classifier reads the 'document' and 'token' columns produced by the stages above
sequenceClassifier = RoBertaForSequenceClassification.pretrained("covid_tweet_sentiment_analysis_roberta_model","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

// the classifier reads the "document" and "token" columns produced by the stages above
val sequenceClassifier = RoBertaForSequenceClassification.pretrained("covid_tweet_sentiment_analysis_roberta_model", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
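
The predicted sentiment label ends up in the `class` column defined above; its nested `result` field holds one label per input row, as in this minimal sketch.

```python
# read the predicted label next to the original text
pipelineDF.select("text", "class.result").show(truncate=False)
```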
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|covid_tweet_sentiment_analysis_roberta_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.1 MB| + +## References + +https://huggingface.co/Eva-Gaga/covid-tweet-sentiment-analysis-roberta_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-cpegen_vend_en.md b/docs/_posts/ahmedlone127/2024-09-08-cpegen_vend_en.md new file mode 100644 index 00000000000000..1d2ba125f5d72b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-cpegen_vend_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English cpegen_vend DistilBertForTokenClassification from Neurona +author: John Snow Labs +name: cpegen_vend +date: 2024-09-08 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cpegen_vend` is a English model originally trained by Neurona. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cpegen_vend_en_5.5.0_3.0_1725837605386.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cpegen_vend_en_5.5.0_3.0_1725837605386.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DistilBertForTokenClassification
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

# the classifier reads the 'document' and 'token' columns produced by the stages above
tokenClassifier = DistilBertForTokenClassification.pretrained("cpegen_vend","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

// the classifier reads the "document" and "token" columns produced by the stages above
val tokenClassifier = DistilBertForTokenClassification.pretrained("cpegen_vend", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cpegen_vend| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/Neurona/cpegen_vend \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-cross_all_bs160_allneg_finetuned_webnlg2020_data_coverage_en.md b/docs/_posts/ahmedlone127/2024-09-08-cross_all_bs160_allneg_finetuned_webnlg2020_data_coverage_en.md new file mode 100644 index 00000000000000..9cd3b4d12726c5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-cross_all_bs160_allneg_finetuned_webnlg2020_data_coverage_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English cross_all_bs160_allneg_finetuned_webnlg2020_data_coverage MPNetEmbeddings from teven +author: John Snow Labs +name: cross_all_bs160_allneg_finetuned_webnlg2020_data_coverage +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cross_all_bs160_allneg_finetuned_webnlg2020_data_coverage` is a English model originally trained by teven. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cross_all_bs160_allneg_finetuned_webnlg2020_data_coverage_en_5.5.0_3.0_1725769776205.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cross_all_bs160_allneg_finetuned_webnlg2020_data_coverage_en_5.5.0_3.0_1725769776205.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("cross_all_bs160_allneg_finetuned_webnlg2020_data_coverage","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("cross_all_bs160_allneg_finetuned_webnlg2020_data_coverage","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cross_all_bs160_allneg_finetuned_webnlg2020_data_coverage| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.3 MB| + +## References + +https://huggingface.co/teven/cross_all_bs160_allneg_finetuned_WebNLG2020_data_coverage \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-dbert_model_03_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-dbert_model_03_pipeline_en.md new file mode 100644 index 00000000000000..f7abb472a6b6b3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-dbert_model_03_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dbert_model_03_pipeline pipeline DistilBertForTokenClassification from fcfrank10 +author: John Snow Labs +name: dbert_model_03_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dbert_model_03_pipeline` is a English model originally trained by fcfrank10. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dbert_model_03_pipeline_en_5.5.0_3.0_1725827551610.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dbert_model_03_pipeline_en_5.5.0_3.0_1725827551610.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dbert_model_03_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dbert_model_03_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
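
For quick experiments the same pipeline can also be driven through the lighter `annotate` API instead of `transform`; the sketch below assumes the pipeline's token classifier writes its tags to a `ner` output column, which is the usual convention for these exported NER pipelines.

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("dbert_model_03_pipeline", lang = "en")

# annotate() runs the stages on a single string and returns a dict keyed by output column
result = pipeline.annotate("John Snow Labs is based in Delaware")
print(result["token"])   # tokens (assumed output column name)
print(result["ner"])     # one tag per token (assumed output column name)
```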
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dbert_model_03_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|505.4 MB| + +## References + +https://huggingface.co/fcfrank10/dbert_model_03 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-deberta_v3_large_imdb_v0_2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-deberta_v3_large_imdb_v0_2_pipeline_en.md new file mode 100644 index 00000000000000..da502fc9f24da0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-deberta_v3_large_imdb_v0_2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English deberta_v3_large_imdb_v0_2_pipeline pipeline DeBertaForSequenceClassification from dfurman +author: John Snow Labs +name: deberta_v3_large_imdb_v0_2_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`deberta_v3_large_imdb_v0_2_pipeline` is a English model originally trained by dfurman. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deberta_v3_large_imdb_v0_2_pipeline_en_5.5.0_3.0_1725812690341.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deberta_v3_large_imdb_v0_2_pipeline_en_5.5.0_3.0_1725812690341.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("deberta_v3_large_imdb_v0_2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("deberta_v3_large_imdb_v0_2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|deberta_v3_large_imdb_v0_2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.6 GB| + +## References + +https://huggingface.co/dfurman/deberta-v3-large-imdb-v0.2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-deberta_v3_large_llmmdlprefold_rank_20_09_2023_0_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-deberta_v3_large_llmmdlprefold_rank_20_09_2023_0_pipeline_en.md new file mode 100644 index 00000000000000..fd95f9fa78ce3e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-deberta_v3_large_llmmdlprefold_rank_20_09_2023_0_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English deberta_v3_large_llmmdlprefold_rank_20_09_2023_0_pipeline pipeline DeBertaForSequenceClassification from nagupv +author: John Snow Labs +name: deberta_v3_large_llmmdlprefold_rank_20_09_2023_0_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`deberta_v3_large_llmmdlprefold_rank_20_09_2023_0_pipeline` is a English model originally trained by nagupv. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deberta_v3_large_llmmdlprefold_rank_20_09_2023_0_pipeline_en_5.5.0_3.0_1725804934177.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deberta_v3_large_llmmdlprefold_rank_20_09_2023_0_pipeline_en_5.5.0_3.0_1725804934177.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("deberta_v3_large_llmmdlprefold_rank_20_09_2023_0_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("deberta_v3_large_llmmdlprefold_rank_20_09_2023_0_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|deberta_v3_large_llmmdlprefold_rank_20_09_2023_0_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.6 GB| + +## References + +https://huggingface.co/nagupv/deberta-v3-large_LLMMDLPREFOLD_RANK_20_09_2023_0 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-deberta_v3_large_survey_cross_passage_consistency_rater_half_gpt4_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-deberta_v3_large_survey_cross_passage_consistency_rater_half_gpt4_pipeline_en.md new file mode 100644 index 00000000000000..a6945a6a1804a0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-deberta_v3_large_survey_cross_passage_consistency_rater_half_gpt4_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English deberta_v3_large_survey_cross_passage_consistency_rater_half_gpt4_pipeline pipeline DeBertaForSequenceClassification from domenicrosati +author: John Snow Labs +name: deberta_v3_large_survey_cross_passage_consistency_rater_half_gpt4_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`deberta_v3_large_survey_cross_passage_consistency_rater_half_gpt4_pipeline` is a English model originally trained by domenicrosati. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deberta_v3_large_survey_cross_passage_consistency_rater_half_gpt4_pipeline_en_5.5.0_3.0_1725812921199.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deberta_v3_large_survey_cross_passage_consistency_rater_half_gpt4_pipeline_en_5.5.0_3.0_1725812921199.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("deberta_v3_large_survey_cross_passage_consistency_rater_half_gpt4_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("deberta_v3_large_survey_cross_passage_consistency_rater_half_gpt4_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|deberta_v3_large_survey_cross_passage_consistency_rater_half_gpt4_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.5 GB| + +## References + +https://huggingface.co/domenicrosati/deberta-v3-large-survey-cross_passage_consistency-rater-half-gpt4 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-deberta_v3_large_survey_main_passage_consistency_rater_all_gpt4_en.md b/docs/_posts/ahmedlone127/2024-09-08-deberta_v3_large_survey_main_passage_consistency_rater_all_gpt4_en.md new file mode 100644 index 00000000000000..ae4fffdc23f316 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-deberta_v3_large_survey_main_passage_consistency_rater_all_gpt4_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English deberta_v3_large_survey_main_passage_consistency_rater_all_gpt4 DeBertaForSequenceClassification from domenicrosati +author: John Snow Labs +name: deberta_v3_large_survey_main_passage_consistency_rater_all_gpt4 +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, deberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DeBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`deberta_v3_large_survey_main_passage_consistency_rater_all_gpt4` is a English model originally trained by domenicrosati. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deberta_v3_large_survey_main_passage_consistency_rater_all_gpt4_en_5.5.0_3.0_1725812727887.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deberta_v3_large_survey_main_passage_consistency_rater_all_gpt4_en_5.5.0_3.0_1725812727887.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DeBertaForSequenceClassification
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

# the classifier reads the 'document' and 'token' columns produced by the stages above
sequenceClassifier = DeBertaForSequenceClassification.pretrained("deberta_v3_large_survey_main_passage_consistency_rater_all_gpt4","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

// the classifier reads the "document" and "token" columns produced by the stages above
val sequenceClassifier = DeBertaForSequenceClassification.pretrained("deberta_v3_large_survey_main_passage_consistency_rater_all_gpt4", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|deberta_v3_large_survey_main_passage_consistency_rater_all_gpt4| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.5 GB| + +## References + +https://huggingface.co/domenicrosati/deberta-v3-large-survey-main_passage_consistency-rater-all-gpt4 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-deberta_v3_large_survey_topicality_rater_half_gpt4_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-deberta_v3_large_survey_topicality_rater_half_gpt4_pipeline_en.md new file mode 100644 index 00000000000000..5b736d3d66d071 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-deberta_v3_large_survey_topicality_rater_half_gpt4_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English deberta_v3_large_survey_topicality_rater_half_gpt4_pipeline pipeline DeBertaForSequenceClassification from domenicrosati +author: John Snow Labs +name: deberta_v3_large_survey_topicality_rater_half_gpt4_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`deberta_v3_large_survey_topicality_rater_half_gpt4_pipeline` is a English model originally trained by domenicrosati. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deberta_v3_large_survey_topicality_rater_half_gpt4_pipeline_en_5.5.0_3.0_1725812232761.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deberta_v3_large_survey_topicality_rater_half_gpt4_pipeline_en_5.5.0_3.0_1725812232761.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("deberta_v3_large_survey_topicality_rater_half_gpt4_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("deberta_v3_large_survey_topicality_rater_half_gpt4_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|deberta_v3_large_survey_topicality_rater_half_gpt4_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.5 GB| + +## References + +https://huggingface.co/domenicrosati/deberta-v3-large-survey-topicality-rater-half-gpt4 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distelbert_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-distelbert_pipeline_en.md new file mode 100644 index 00000000000000..b39b63d0a23d15 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distelbert_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distelbert_pipeline pipeline DistilBertForQuestionAnswering from juman48 +author: John Snow Labs +name: distelbert_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distelbert_pipeline` is a English model originally trained by juman48. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distelbert_pipeline_en_5.5.0_3.0_1725823106512.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distelbert_pipeline_en_5.5.0_3.0_1725823106512.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distelbert_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distelbert_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
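
Question-answering pipelines take two inputs rather than one, so the `df` passed to `transform` needs both a question and a context column. The sketch below is an assumption-heavy illustration: it presumes the bundled MultiDocumentAssembler reads `question` and `context` columns and that the span predictions land in an `answer` column, which is the usual layout for these exported QA pipelines.

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("distelbert_pipeline", lang = "en")

# assumed input column names: "question" and "context"
df = spark.createDataFrame(
    [["What is my name?", "My name is Clara and I live in Berkeley."]]
).toDF("question", "context")

# assumed output column name: "answer"
pipeline.transform(df).select("answer.result").show(truncate=False)
```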
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distelbert_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|243.8 MB| + +## References + +https://huggingface.co/juman48/Distelbert + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distil_bert_fintuned_issues_cfpb_complaints_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-distil_bert_fintuned_issues_cfpb_complaints_pipeline_en.md new file mode 100644 index 00000000000000..f0682ba59ad098 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distil_bert_fintuned_issues_cfpb_complaints_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distil_bert_fintuned_issues_cfpb_complaints_pipeline pipeline DistilBertForSequenceClassification from Mahesh9 +author: John Snow Labs +name: distil_bert_fintuned_issues_cfpb_complaints_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distil_bert_fintuned_issues_cfpb_complaints_pipeline` is a English model originally trained by Mahesh9. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distil_bert_fintuned_issues_cfpb_complaints_pipeline_en_5.5.0_3.0_1725809220617.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distil_bert_fintuned_issues_cfpb_complaints_pipeline_en_5.5.0_3.0_1725809220617.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distil_bert_fintuned_issues_cfpb_complaints_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distil_bert_fintuned_issues_cfpb_complaints_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distil_bert_fintuned_issues_cfpb_complaints_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/Mahesh9/distil-bert-fintuned-issues-cfpb-complaints + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_cased_distilbert_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_cased_distilbert_pipeline_en.md new file mode 100644 index 00000000000000..666b2f55dfc370 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_cased_distilbert_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_cased_distilbert_pipeline pipeline DistilBertEmbeddings from distilbert +author: John Snow Labs +name: distilbert_base_cased_distilbert_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_cased_distilbert_pipeline` is a English model originally trained by distilbert. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_cased_distilbert_pipeline_en_5.5.0_3.0_1725776538256.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_cased_distilbert_pipeline_en_5.5.0_3.0_1725776538256.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_cased_distilbert_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_cased_distilbert_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_cased_distilbert_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|243.8 MB| + +## References + +https://huggingface.co/distilbert/distilbert-base-cased + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_andr830g_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_andr830g_en.md new file mode 100644 index 00000000000000..199977cc68fe01 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_andr830g_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_emotion_andr830g DistilBertForSequenceClassification from andr830g +author: John Snow Labs +name: distilbert_base_uncased_finetuned_emotion_andr830g +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_emotion_andr830g` is a English model originally trained by andr830g. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_andr830g_en_5.5.0_3.0_1725808718176.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_andr830g_en_5.5.0_3.0_1725808718176.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DistilBertForSequenceClassification
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

# the classifier reads the 'document' and 'token' columns produced by the stages above
sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_base_uncased_finetuned_emotion_andr830g","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

// the classifier reads the "document" and "token" columns produced by the stages above
val sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_base_uncased_finetuned_emotion_andr830g", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_emotion_andr830g| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/andr830g/distilbert-base-uncased-finetuned-emotion \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_arulkumar03_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_arulkumar03_pipeline_en.md new file mode 100644 index 00000000000000..6a0b248c7b625d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_arulkumar03_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_emotion_arulkumar03_pipeline pipeline DistilBertForSequenceClassification from Arulkumar03 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_emotion_arulkumar03_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_emotion_arulkumar03_pipeline` is a English model originally trained by Arulkumar03. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_arulkumar03_pipeline_en_5.5.0_3.0_1725809093297.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_arulkumar03_pipeline_en_5.5.0_3.0_1725809093297.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_emotion_arulkumar03_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_emotion_arulkumar03_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_emotion_arulkumar03_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/Arulkumar03/distilbert-base-uncased-finetuned-emotion + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_lilvoda_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_lilvoda_pipeline_en.md new file mode 100644 index 00000000000000..6a968c97da67c6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_lilvoda_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_emotion_lilvoda_pipeline pipeline DistilBertForSequenceClassification from lilvoda +author: John Snow Labs +name: distilbert_base_uncased_finetuned_emotion_lilvoda_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_emotion_lilvoda_pipeline` is a English model originally trained by lilvoda. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_lilvoda_pipeline_en_5.5.0_3.0_1725775259363.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_lilvoda_pipeline_en_5.5.0_3.0_1725775259363.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_emotion_lilvoda_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_emotion_lilvoda_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_emotion_lilvoda_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/lilvoda/distilbert-base-uncased-finetuned-emotion + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_piyushathawale_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_piyushathawale_en.md new file mode 100644 index 00000000000000..ef542661018131 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_piyushathawale_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_emotion_piyushathawale DistilBertForSequenceClassification from piyushathawale +author: John Snow Labs +name: distilbert_base_uncased_finetuned_emotion_piyushathawale +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_emotion_piyushathawale` is a English model originally trained by piyushathawale. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_piyushathawale_en_5.5.0_3.0_1725808821442.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_piyushathawale_en_5.5.0_3.0_1725808821442.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DistilBertForSequenceClassification
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

# the classifier reads the 'document' and 'token' columns produced by the stages above
sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_base_uncased_finetuned_emotion_piyushathawale","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

// the classifier reads the "document" and "token" columns produced by the stages above
val sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_base_uncased_finetuned_emotion_piyushathawale", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_emotion_piyushathawale| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/piyushathawale/distilbert-base-uncased-finetuned-emotion \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_piyushathawale_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_piyushathawale_pipeline_en.md new file mode 100644 index 00000000000000..f6838c061024e9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_piyushathawale_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_emotion_piyushathawale_pipeline pipeline DistilBertForSequenceClassification from piyushathawale +author: John Snow Labs +name: distilbert_base_uncased_finetuned_emotion_piyushathawale_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_emotion_piyushathawale_pipeline` is a English model originally trained by piyushathawale. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_piyushathawale_pipeline_en_5.5.0_3.0_1725808834674.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_piyushathawale_pipeline_en_5.5.0_3.0_1725808834674.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_emotion_piyushathawale_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_emotion_piyushathawale_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_emotion_piyushathawale_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/piyushathawale/distilbert-base-uncased-finetuned-emotion + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_taoyoung_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_taoyoung_en.md new file mode 100644 index 00000000000000..8c376bec552be0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_emotion_taoyoung_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_emotion_taoyoung DistilBertForSequenceClassification from taoyoung +author: John Snow Labs +name: distilbert_base_uncased_finetuned_emotion_taoyoung +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_emotion_taoyoung` is a English model originally trained by taoyoung. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_taoyoung_en_5.5.0_3.0_1725808504087.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_emotion_taoyoung_en_5.5.0_3.0_1725808504087.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DistilBertForSequenceClassification
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

# the classifier reads the 'document' and 'token' columns produced by the stages above
sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_base_uncased_finetuned_emotion_taoyoung","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

// the classifier reads the "document" and "token" columns produced by the stages above
val sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_base_uncased_finetuned_emotion_taoyoung", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_emotion_taoyoung| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/taoyoung/distilbert-base-uncased-finetuned-emotion \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_events_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_events_pipeline_en.md new file mode 100644 index 00000000000000..7f34a305df2ff9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_events_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_events_pipeline pipeline DistilBertForSequenceClassification from joedonino +author: John Snow Labs +name: distilbert_base_uncased_finetuned_events_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_events_pipeline` is a English model originally trained by joedonino. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_events_pipeline_en_5.5.0_3.0_1725808625782.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_events_pipeline_en_5.5.0_3.0_1725808625782.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_events_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_events_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
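
The snippet above assumes an existing DataFrame `df`. A minimal sketch of preparing one; the input column name `text` and the output column `class` are assumptions based on the annotators listed below:

```python
# Build a one-column DataFrame for the pretrained pipeline and inspect the result.
df = spark.createDataFrame([["I love Spark NLP!"]]).toDF("text")
annotations = pipeline.transform(df)
annotations.select("class.result").show(truncate=False)

# For quick experiments on plain strings, annotate() skips the DataFrame step.
print(pipeline.annotate("I love Spark NLP!"))
```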
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_events_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/joedonino/distilbert-base-uncased-finetuned-events + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_accelerate_minshengchan_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_accelerate_minshengchan_pipeline_en.md new file mode 100644 index 00000000000000..702663eff2ae70 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_accelerate_minshengchan_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_accelerate_minshengchan_pipeline pipeline DistilBertEmbeddings from minshengchan +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_accelerate_minshengchan_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_accelerate_minshengchan_pipeline` is a English model originally trained by minshengchan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_minshengchan_pipeline_en_5.5.0_3.0_1725782237433.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_minshengchan_pipeline_en_5.5.0_3.0_1725782237433.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_accelerate_minshengchan_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_accelerate_minshengchan_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_accelerate_minshengchan_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/minshengchan/distilbert-base-uncased-finetuned-imdb-accelerate + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_accelerate_phantatbach_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_accelerate_phantatbach_pipeline_en.md new file mode 100644 index 00000000000000..740a362ec7d97f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_accelerate_phantatbach_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_accelerate_phantatbach_pipeline pipeline DistilBertEmbeddings from phantatbach +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_accelerate_phantatbach_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_accelerate_phantatbach_pipeline` is a English model originally trained by phantatbach. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_phantatbach_pipeline_en_5.5.0_3.0_1725782216552.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_phantatbach_pipeline_en_5.5.0_3.0_1725782216552.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_accelerate_phantatbach_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_accelerate_phantatbach_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_accelerate_phantatbach_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/phantatbach/distilbert-base-uncased-finetuned-imdb-accelerate + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_accelerate_xxxxxcz_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_accelerate_xxxxxcz_en.md new file mode 100644 index 00000000000000..44c6be27eceba4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_accelerate_xxxxxcz_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_accelerate_xxxxxcz DistilBertEmbeddings from xxxxxcz +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_accelerate_xxxxxcz +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_accelerate_xxxxxcz` is a English model originally trained by xxxxxcz. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_xxxxxcz_en_5.5.0_3.0_1725828477254.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_xxxxxcz_en_5.5.0_3.0_1725828477254.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_accelerate_xxxxxcz","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_accelerate_xxxxxcz","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
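
To look at the vectors produced by the example above, the `embeddings` annotation column can be exploded into one row per token. A minimal sketch, assuming the standard Spark NLP annotation schema:

```python
from pyspark.sql import functions as F

# One row per token: `result` is the token text, `embeddings` the vector
# (768 values for this base-size DistilBERT model).
pipelineDF.select(F.explode("embeddings").alias("emb")) \
    .select(F.col("emb.result").alias("token"),
            F.col("emb.embeddings").alias("vector")) \
    .show(truncate=80)
```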
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_accelerate_xxxxxcz| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/xxxxxcz/distilbert-base-uncased-finetuned-imdb-accelerate \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_adrien35_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_adrien35_en.md new file mode 100644 index 00000000000000..108d7b3ae0b7e6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_adrien35_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_adrien35 DistilBertEmbeddings from Adrien35 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_adrien35 +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_adrien35` is a English model originally trained by Adrien35. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_adrien35_en_5.5.0_3.0_1725776360564.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_adrien35_en_5.5.0_3.0_1725776360564.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_adrien35","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_adrien35","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_adrien35| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Adrien35/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_ellieburton_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_ellieburton_en.md new file mode 100644 index 00000000000000..6c419e2c515dca --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_ellieburton_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_ellieburton DistilBertEmbeddings from ellieburton +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_ellieburton +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_ellieburton` is a English model originally trained by ellieburton. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_ellieburton_en_5.5.0_3.0_1725782433605.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_ellieburton_en_5.5.0_3.0_1725782433605.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_ellieburton","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_ellieburton","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_ellieburton| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/ellieburton/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_jkv53_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_jkv53_pipeline_en.md new file mode 100644 index 00000000000000..bf6aade3117971 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_jkv53_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_jkv53_pipeline pipeline DistilBertEmbeddings from jkv53 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_jkv53_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_jkv53_pipeline` is a English model originally trained by jkv53. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_jkv53_pipeline_en_5.5.0_3.0_1725782791848.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_jkv53_pipeline_en_5.5.0_3.0_1725782791848.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_jkv53_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_jkv53_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_jkv53_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/jkv53/distilbert-base-uncased-finetuned-imdb + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_lifan_z_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_lifan_z_pipeline_en.md new file mode 100644 index 00000000000000..90f9d650c06516 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_lifan_z_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_lifan_z_pipeline pipeline DistilBertEmbeddings from Lifan-Z +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_lifan_z_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_lifan_z_pipeline` is a English model originally trained by Lifan-Z. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_lifan_z_pipeline_en_5.5.0_3.0_1725782587436.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_lifan_z_pipeline_en_5.5.0_3.0_1725782587436.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_lifan_z_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_lifan_z_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_lifan_z_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Lifan-Z/distilbert-base-uncased-finetuned-imdb + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_sabbasi_11_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_sabbasi_11_pipeline_en.md new file mode 100644 index 00000000000000..ea4b8833fe47a9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_sabbasi_11_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_sabbasi_11_pipeline pipeline DistilBertEmbeddings from Sabbasi-11 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_sabbasi_11_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_sabbasi_11_pipeline` is a English model originally trained by Sabbasi-11. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_sabbasi_11_pipeline_en_5.5.0_3.0_1725776142313.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_sabbasi_11_pipeline_en_5.5.0_3.0_1725776142313.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_sabbasi_11_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_sabbasi_11_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_sabbasi_11_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Sabbasi-11/distilbert-base-uncased-finetuned-imdb + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_skyimple_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_skyimple_pipeline_en.md new file mode 100644 index 00000000000000..fa0cac1323ebd4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_skyimple_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_skyimple_pipeline pipeline DistilBertEmbeddings from skyimple +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_skyimple_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_skyimple_pipeline` is a English model originally trained by skyimple. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_skyimple_pipeline_en_5.5.0_3.0_1725828926278.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_skyimple_pipeline_en_5.5.0_3.0_1725828926278.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_skyimple_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_skyimple_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_skyimple_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/skyimple/distilbert-base-uncased-finetuned-imdb + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_wwm_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_wwm_pipeline_en.md new file mode 100644 index 00000000000000..6cbcc4eca881b4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_imdb_wwm_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_wwm_pipeline pipeline DistilBertEmbeddings from chrischang80 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_wwm_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_wwm_pipeline` is a English model originally trained by chrischang80. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_wwm_pipeline_en_5.5.0_3.0_1725828786529.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_wwm_pipeline_en_5.5.0_3.0_1725828786529.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_wwm_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_wwm_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_wwm_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/chrischang80/DistilBert-base-uncased-finetuned-imdb-wwm + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_jd_eng_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_jd_eng_en.md new file mode 100644 index 00000000000000..af32c51c595cd2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_jd_eng_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_jd_eng DistilBertEmbeddings from aliekens +author: John Snow Labs +name: distilbert_base_uncased_finetuned_jd_eng +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_jd_eng` is a English model originally trained by aliekens. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_jd_eng_en_5.5.0_3.0_1725828582880.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_jd_eng_en_5.5.0_3.0_1725828582880.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_jd_eng","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_jd_eng","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
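
If the token vectors from the pipeline above are meant to feed a Spark ML estimator, an `EmbeddingsFinisher` can convert the annotation column into plain vectors. A sketch under that assumption:

```python
from sparknlp.base import EmbeddingsFinisher

# Convert Spark NLP annotations into Spark ML vectors for downstream stages.
finisher = EmbeddingsFinisher() \
    .setInputCols(["embeddings"]) \
    .setOutputCols(["finished_embeddings"]) \
    .setOutputAsVector(True)

finisher.transform(pipelineDF).select("finished_embeddings").show(truncate=80)
```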
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_jd_eng| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/aliekens/distilbert-base-uncased-finetuned-jd-eng \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_ner_abritez_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_ner_abritez_en.md new file mode 100644 index 00000000000000..39fdb0669c8b0f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_ner_abritez_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_ner_abritez DistilBertForTokenClassification from abritez +author: John Snow Labs +name: distilbert_base_uncased_finetuned_ner_abritez +date: 2024-09-08 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_ner_abritez` is a English model originally trained by abritez. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_abritez_en_5.5.0_3.0_1725789044285.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_abritez_en_5.5.0_3.0_1725789044285.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_ner_abritez","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_ner_abritez", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
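
The token classifier above emits one IOB tag per token in the `ner` column. To group those tags into entity chunks, a `NerConverter` can be appended; a minimal sketch reusing the column names from the example:

```python
from sparknlp.annotator import NerConverter
from pyspark.sql import functions as F

# Merge token-level B-/I- tags into whole entity spans.
converter = NerConverter() \
    .setInputCols(["document", "token", "ner"]) \
    .setOutputCol("ner_chunk")

converter.transform(pipelineDF) \
    .select(F.explode("ner_chunk").alias("chunk")) \
    .select(F.col("chunk.result"), F.col("chunk.metadata")["entity"].alias("entity")) \
    .show(truncate=False)
```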
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_ner_abritez| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/abritez/distilbert-base-uncased-finetuned-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_ner_coffee3699_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_ner_coffee3699_en.md new file mode 100644 index 00000000000000..fa75bf87db2c97 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_ner_coffee3699_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_ner_coffee3699 DistilBertForTokenClassification from coffee3699 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_ner_coffee3699 +date: 2024-09-08 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_ner_coffee3699` is a English model originally trained by coffee3699. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_coffee3699_en_5.5.0_3.0_1725837700826.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_coffee3699_en_5.5.0_3.0_1725837700826.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_ner_coffee3699","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_ner_coffee3699", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_ner_coffee3699| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/coffee3699/distilbert-base-uncased-finetuned-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_ner_rishikasrinivas_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_ner_rishikasrinivas_en.md new file mode 100644 index 00000000000000..31966d4d15a4b5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_ner_rishikasrinivas_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_ner_rishikasrinivas DistilBertForTokenClassification from rishikasrinivas +author: John Snow Labs +name: distilbert_base_uncased_finetuned_ner_rishikasrinivas +date: 2024-09-08 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_ner_rishikasrinivas` is a English model originally trained by rishikasrinivas. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_rishikasrinivas_en_5.5.0_3.0_1725837296303.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_rishikasrinivas_en_5.5.0_3.0_1725837296303.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_ner_rishikasrinivas","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_ner_rishikasrinivas", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_ner_rishikasrinivas| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/rishikasrinivas/distilbert-base-uncased-finetuned-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_ner_spectrumcrovn_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_ner_spectrumcrovn_en.md new file mode 100644 index 00000000000000..db4deef33c756a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_finetuned_ner_spectrumcrovn_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_ner_spectrumcrovn DistilBertForTokenClassification from SpectrumCrovn +author: John Snow Labs +name: distilbert_base_uncased_finetuned_ner_spectrumcrovn +date: 2024-09-08 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_ner_spectrumcrovn` is a English model originally trained by SpectrumCrovn. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_spectrumcrovn_en_5.5.0_3.0_1725837580203.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ner_spectrumcrovn_en_5.5.0_3.0_1725837580203.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_ner_spectrumcrovn","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_finetuned_ner_spectrumcrovn", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_ner_spectrumcrovn| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/SpectrumCrovn/distilbert-base-uncased-finetuned-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_issues_128_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_issues_128_en.md new file mode 100644 index 00000000000000..130dc86c0c815b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_issues_128_en.md @@ -0,0 +1,92 @@ +--- +layout: model +title: English distilbert_base_uncased_issues_128 DistilBertEmbeddings from Chrispfield +author: John Snow Labs +name: distilbert_base_uncased_issues_128 +date: 2024-09-08 +tags: [distilbert, en, open_source, fill_mask, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_issues_128` is a English model originally trained by Chrispfield. + +## Predicted Entities + + + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_issues_128_en_5.5.0_3.0_1725776453332.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_issues_128_en_5.5.0_3.0_1725776453332.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_issues_128","en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("embeddings")

pipeline = Pipeline().setStages([document_assembler, tokenizer, embeddings])

data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)

pipelineDF = pipelineModel.transform(data)
```
```scala
val document_assembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val embeddings = DistilBertEmbeddings
  .pretrained("distilbert_base_uncased_issues_128", "en")
  .setInputCols(Array("document", "token"))
  .setOutputCol("embeddings")

val pipeline = new Pipeline().setStages(Array(document_assembler, tokenizer, embeddings))

val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)

val pipelineDF = pipelineModel.transform(data)
```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_issues_128| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +References + +https://huggingface.co/Chrispfield/distilbert-base-uncased-issues-128 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_odm_zphr_0st13sd_ut72ut1large13pfxnf_simsp400_clean200_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_odm_zphr_0st13sd_ut72ut1large13pfxnf_simsp400_clean200_en.md new file mode 100644 index 00000000000000..26c14b002afdea --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_base_uncased_odm_zphr_0st13sd_ut72ut1large13pfxnf_simsp400_clean200_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_odm_zphr_0st13sd_ut72ut1large13pfxnf_simsp400_clean200 DistilBertForSequenceClassification from tom192180 +author: John Snow Labs +name: distilbert_base_uncased_odm_zphr_0st13sd_ut72ut1large13pfxnf_simsp400_clean200 +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_odm_zphr_0st13sd_ut72ut1large13pfxnf_simsp400_clean200` is a English model originally trained by tom192180. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_odm_zphr_0st13sd_ut72ut1large13pfxnf_simsp400_clean200_en_5.5.0_3.0_1725777494330.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_odm_zphr_0st13sd_ut72ut1large13pfxnf_simsp400_clean200_en_5.5.0_3.0_1725777494330.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_base_uncased_odm_zphr_0st13sd_ut72ut1large13pfxnf_simsp400_clean200","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols(Array("document"))
  .setOutputCol("token")

val sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_base_uncased_odm_zphr_0st13sd_ut72ut1large13pfxnf_simsp400_clean200", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_odm_zphr_0st13sd_ut72ut1large13pfxnf_simsp400_clean200| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.6 MB| + +## References + +https://huggingface.co/tom192180/distilbert-base-uncased_odm_zphr_0st13sd_ut72ut1large13PfxNf_simsp400_clean200 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_qa_squad_english_german_spanish_model_pipeline_xx.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_qa_squad_english_german_spanish_model_pipeline_xx.md new file mode 100644 index 00000000000000..0e6d261c86ad83 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_qa_squad_english_german_spanish_model_pipeline_xx.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Multilingual distilbert_qa_squad_english_german_spanish_model_pipeline pipeline DistilBertForQuestionAnswering from ZYW +author: John Snow Labs +name: distilbert_qa_squad_english_german_spanish_model_pipeline +date: 2024-09-08 +tags: [xx, open_source, pipeline, onnx] +task: Question Answering +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_qa_squad_english_german_spanish_model_pipeline` is a Multilingual model originally trained by ZYW. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_qa_squad_english_german_spanish_model_pipeline_xx_5.5.0_3.0_1725818261757.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_qa_squad_english_german_spanish_model_pipeline_xx_5.5.0_3.0_1725818261757.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_qa_squad_english_german_spanish_model_pipeline", lang = "xx") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_qa_squad_english_german_spanish_model_pipeline", lang = "xx") +val annotations = pipeline.transform(df) + +``` +
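
Because this pipeline starts with a MultiDocumentAssembler, it expects both a question and a context rather than a single text column. A hedged sketch, assuming the exported pipeline reads `question` and `context` columns and writes an `answer` column (names not confirmed by the source):

```python
# Hypothetical input columns for the extractive QA pipeline.
df = spark.createDataFrame(
    [["What is Spark NLP?", "Spark NLP is an NLP library built on Apache Spark."]]
).toDF("question", "context")

annotations = pipeline.transform(df)
annotations.select("answer.result").show(truncate=False)
```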
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_qa_squad_english_german_spanish_model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|xx| +|Size:|505.4 MB| + +## References + +https://huggingface.co/ZYW/squad-en-de-es-model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_sanskrit_saskta_glue_experiment_logit_kd_cola_384_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_sanskrit_saskta_glue_experiment_logit_kd_cola_384_en.md new file mode 100644 index 00000000000000..421adbcd73e97b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_sanskrit_saskta_glue_experiment_logit_kd_cola_384_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_sanskrit_saskta_glue_experiment_logit_kd_cola_384 DistilBertForSequenceClassification from gokuls +author: John Snow Labs +name: distilbert_sanskrit_saskta_glue_experiment_logit_kd_cola_384 +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_sanskrit_saskta_glue_experiment_logit_kd_cola_384` is a English model originally trained by gokuls. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_sanskrit_saskta_glue_experiment_logit_kd_cola_384_en_5.5.0_3.0_1725764567733.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_sanskrit_saskta_glue_experiment_logit_kd_cola_384_en_5.5.0_3.0_1725764567733.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_sanskrit_saskta_glue_experiment_logit_kd_cola_384","en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_sanskrit_saskta_glue_experiment_logit_kd_cola_384", "en")
    .setInputCols(Array("document", "token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
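
As with the other cards in this series, the example omits session setup and imports. A minimal setup sketch for the Python snippet is given below; the import paths follow the standard Spark NLP Python package layout.

```python
# Setup sketch for the Python example above (not part of the original card).
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DistilBertForSequenceClassification
from pyspark.ml import Pipeline

# Start (or attach to) a Spark session with Spark NLP on the classpath.
spark = sparknlp.start()
```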
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_sanskrit_saskta_glue_experiment_logit_kd_cola_384| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|111.8 MB| + +## References + +https://huggingface.co/gokuls/distilbert_sa_GLUE_Experiment_logit_kd_cola_384 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_sst5_sentiment_analyzer_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_sst5_sentiment_analyzer_pipeline_en.md new file mode 100644 index 00000000000000..d58d464152892e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_sst5_sentiment_analyzer_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_sst5_sentiment_analyzer_pipeline pipeline DistilBertForSequenceClassification from jigarcpatel +author: John Snow Labs +name: distilbert_sst5_sentiment_analyzer_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_sst5_sentiment_analyzer_pipeline` is a English model originally trained by jigarcpatel. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_sst5_sentiment_analyzer_pipeline_en_5.5.0_3.0_1725775058676.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_sst5_sentiment_analyzer_pipeline_en_5.5.0_3.0_1725775058676.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_sst5_sentiment_analyzer_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_sst5_sentiment_analyzer_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
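
The `df` in the snippet above is assumed to already exist. A hedged end-to-end sketch follows; it assumes the pipeline's DocumentAssembler reads a `text` column and that the classifier writes its prediction to a `class` column, matching the conventions used by the other cards in this series.

```python
# Usage sketch (assumptions: a "text" input column and a "class" output column).
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

spark = sparknlp.start()

pipeline = PretrainedPipeline("distilbert_sst5_sentiment_analyzer_pipeline", lang = "en")

df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
annotations = pipeline.transform(df)
annotations.select("text", "class.result").show(truncate=False)
```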
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_sst5_sentiment_analyzer_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/jigarcpatel/distilbert-sst5-sentiment-analyzer + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilbert_suicide_detection_hk_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilbert_suicide_detection_hk_en.md new file mode 100644 index 00000000000000..0eba5e454cfb9e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilbert_suicide_detection_hk_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_suicide_detection_hk DistilBertForSequenceClassification from wcyat +author: John Snow Labs +name: distilbert_suicide_detection_hk +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_suicide_detection_hk` is a English model originally trained by wcyat. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_suicide_detection_hk_en_5.5.0_3.0_1725775364452.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_suicide_detection_hk_en_5.5.0_3.0_1725775364452.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_suicide_detection_hk","en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_suicide_detection_hk", "en")
    .setInputCols(Array("document", "token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_suicide_detection_hk| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|507.6 MB| + +## References + +https://huggingface.co/wcyat/distilbert-suicide-detection-hk \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilfinetunebert_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilfinetunebert_en.md new file mode 100644 index 00000000000000..c05f338e21c71d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilfinetunebert_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilfinetunebert DistilBertForSequenceClassification from HRKhan +author: John Snow Labs +name: distilfinetunebert +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilfinetunebert` is a English model originally trained by HRKhan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilfinetunebert_en_5.5.0_3.0_1725808816509.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilfinetunebert_en_5.5.0_3.0_1725808816509.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilfinetunebert","en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilfinetunebert", "en")
    .setInputCols(Array("document", "token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
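
After `transform`, the predicted label is stored in the `class` annotation column set above. A short sketch of reading it back:

```python
# Reading predictions from the example above; "class" matches setOutputCol("class").
pipelineDF.select("text", "class.result").show(truncate=False)

# Each class.result entry is an array with one predicted label per input row.
labels = [row["result"][0] for row in pipelineDF.select("class.result").collect()]
print(labels)
```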
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilfinetunebert| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/HRKhan/DistilFineTuneBert \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-distilfinetunebert_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-distilfinetunebert_pipeline_en.md new file mode 100644 index 00000000000000..a206f3a9c1ec27 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-distilfinetunebert_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilfinetunebert_pipeline pipeline DistilBertForSequenceClassification from HRKhan +author: John Snow Labs +name: distilfinetunebert_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilfinetunebert_pipeline` is a English model originally trained by HRKhan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilfinetunebert_pipeline_en_5.5.0_3.0_1725808829923.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilfinetunebert_pipeline_en_5.5.0_3.0_1725808829923.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilfinetunebert_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilfinetunebert_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilfinetunebert_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/HRKhan/DistilFineTuneBert + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-dummy_model_bellylee_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_bellylee_pipeline_en.md new file mode 100644 index 00000000000000..29677ef067da69 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_bellylee_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_bellylee_pipeline pipeline CamemBertEmbeddings from bellylee +author: John Snow Labs +name: dummy_model_bellylee_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_bellylee_pipeline` is a English model originally trained by bellylee. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_bellylee_pipeline_en_5.5.0_3.0_1725786356914.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_bellylee_pipeline_en_5.5.0_3.0_1725786356914.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_bellylee_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_bellylee_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_bellylee_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/bellylee/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-dummy_model_ccyr119_en.md b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_ccyr119_en.md new file mode 100644 index 00000000000000..a2f6a8e785e66d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_ccyr119_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_ccyr119 CamemBertEmbeddings from ccyr119 +author: John Snow Labs +name: dummy_model_ccyr119 +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_ccyr119` is a English model originally trained by ccyr119. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_ccyr119_en_5.5.0_3.0_1725836608260.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_ccyr119_en_5.5.0_3.0_1725836608260.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_ccyr119","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_ccyr119","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
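
The example above leaves the vectors in the annotation column named `embeddings`. A small sketch of pulling out the per-token vectors, using the standard Spark NLP annotation schema:

```python
# Sketch: extracting the token-level vectors produced by the example above.
from pyspark.sql.functions import col, explode

vectors = pipelineDF.select(explode(col("embeddings.embeddings")).alias("vector"))
vectors.show(3, truncate=80)
```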
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_ccyr119| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/ccyr119/dummy_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-dummy_model_elusive_magnolia_en.md b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_elusive_magnolia_en.md new file mode 100644 index 00000000000000..58c30762137b56 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_elusive_magnolia_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_elusive_magnolia CamemBertEmbeddings from elusive-magnolia +author: John Snow Labs +name: dummy_model_elusive_magnolia +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_elusive_magnolia` is a English model originally trained by elusive-magnolia. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_elusive_magnolia_en_5.5.0_3.0_1725786867309.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_elusive_magnolia_en_5.5.0_3.0_1725786867309.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_elusive_magnolia","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_elusive_magnolia","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_elusive_magnolia| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/elusive-magnolia/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-dummy_model_itsramyah_en.md b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_itsramyah_en.md new file mode 100644 index 00000000000000..b724e8df23c01d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_itsramyah_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_itsramyah CamemBertEmbeddings from itsramyah +author: John Snow Labs +name: dummy_model_itsramyah +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_itsramyah` is a English model originally trained by itsramyah. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_itsramyah_en_5.5.0_3.0_1725836877641.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_itsramyah_en_5.5.0_3.0_1725836877641.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_itsramyah","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_itsramyah","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_itsramyah| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/itsramyah/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-dummy_model_lilywchen_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_lilywchen_pipeline_en.md new file mode 100644 index 00000000000000..38d7ed5f6b9acb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_lilywchen_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_lilywchen_pipeline pipeline CamemBertEmbeddings from lilywchen +author: John Snow Labs +name: dummy_model_lilywchen_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_lilywchen_pipeline` is a English model originally trained by lilywchen. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_lilywchen_pipeline_en_5.5.0_3.0_1725787026598.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_lilywchen_pipeline_en_5.5.0_3.0_1725787026598.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_lilywchen_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_lilywchen_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_lilywchen_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/lilywchen/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-dummy_model_mnslarcher_en.md b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_mnslarcher_en.md new file mode 100644 index 00000000000000..181430eaba7d9e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_mnslarcher_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_mnslarcher CamemBertEmbeddings from mnslarcher +author: John Snow Labs +name: dummy_model_mnslarcher +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_mnslarcher` is a English model originally trained by mnslarcher. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_mnslarcher_en_5.5.0_3.0_1725786743700.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_mnslarcher_en_5.5.0_3.0_1725786743700.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_mnslarcher","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_mnslarcher","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_mnslarcher| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/mnslarcher/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-dummy_model_reto55_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_reto55_pipeline_en.md new file mode 100644 index 00000000000000..ce9db6aa696ee1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_reto55_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_reto55_pipeline pipeline CamemBertEmbeddings from reto55 +author: John Snow Labs +name: dummy_model_reto55_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_reto55_pipeline` is a English model originally trained by reto55. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_reto55_pipeline_en_5.5.0_3.0_1725786681273.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_reto55_pipeline_en_5.5.0_3.0_1725786681273.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_reto55_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_reto55_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_reto55_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/reto55/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-dummy_model_tkoyama_en.md b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_tkoyama_en.md new file mode 100644 index 00000000000000..ea0559ecf14112 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_tkoyama_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_tkoyama CamemBertEmbeddings from tkoyama +author: John Snow Labs +name: dummy_model_tkoyama +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_tkoyama` is a English model originally trained by tkoyama. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_tkoyama_en_5.5.0_3.0_1725786650489.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_tkoyama_en_5.5.0_3.0_1725786650489.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_tkoyama","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_tkoyama","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_tkoyama| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/tkoyama/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-dummy_model_totoroeric_en.md b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_totoroeric_en.md new file mode 100644 index 00000000000000..2c0098bb874a96 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_totoroeric_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_totoroeric CamemBertEmbeddings from totoroeric +author: John Snow Labs +name: dummy_model_totoroeric +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_totoroeric` is a English model originally trained by totoroeric. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_totoroeric_en_5.5.0_3.0_1725836885548.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_totoroeric_en_5.5.0_3.0_1725836885548.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_totoroeric","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_totoroeric","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_totoroeric| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/totoroeric/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-dummy_model_yuwei2342_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_yuwei2342_pipeline_en.md new file mode 100644 index 00000000000000..a19178070e9fcf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-dummy_model_yuwei2342_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_yuwei2342_pipeline pipeline CamemBertEmbeddings from yuwei2342 +author: John Snow Labs +name: dummy_model_yuwei2342_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_yuwei2342_pipeline` is a English model originally trained by yuwei2342. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_yuwei2342_pipeline_en_5.5.0_3.0_1725836238279.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_yuwei2342_pipeline_en_5.5.0_3.0_1725836238279.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_yuwei2342_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_yuwei2342_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_yuwei2342_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/yuwei2342/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-ea_setfit_v1_classifier_en.md b/docs/_posts/ahmedlone127/2024-09-08-ea_setfit_v1_classifier_en.md new file mode 100644 index 00000000000000..bba1fa277f8248 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-ea_setfit_v1_classifier_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English ea_setfit_v1_classifier MPNetEmbeddings from czesty +author: John Snow Labs +name: ea_setfit_v1_classifier +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ea_setfit_v1_classifier` is a English model originally trained by czesty. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ea_setfit_v1_classifier_en_5.5.0_3.0_1725769500867.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ea_setfit_v1_classifier_en_5.5.0_3.0_1725769500867.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("ea_setfit_v1_classifier","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("ea_setfit_v1_classifier","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
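
MPNetEmbeddings produces one sentence-level vector per document, so a common follow-up is a similarity check between two texts. The sketch below reuses `pipelineModel` from the Python example and assumes NumPy is available; it is illustrative only.

```python
# Sketch: cosine similarity between two texts using the embeddings from the card above.
import numpy as np

texts = spark.createDataFrame([["I love spark-nlp"], ["Spark NLP is great"]]).toDF("text")
rows = pipelineModel.transform(texts).select("embeddings.embeddings").collect()

a = np.array(rows[0]["embeddings"][0])
b = np.array(rows[1]["embeddings"][0])
print(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
```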
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ea_setfit_v1_classifier| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/czesty/ea-setfit-v1-classifier \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-early_readmission_deberta_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-early_readmission_deberta_pipeline_en.md new file mode 100644 index 00000000000000..f1ce79bce8d745 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-early_readmission_deberta_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English early_readmission_deberta_pipeline pipeline DeBertaForSequenceClassification from austin +author: John Snow Labs +name: early_readmission_deberta_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`early_readmission_deberta_pipeline` is a English model originally trained by austin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/early_readmission_deberta_pipeline_en_5.5.0_3.0_1725804303054.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/early_readmission_deberta_pipeline_en_5.5.0_3.0_1725804303054.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("early_readmission_deberta_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("early_readmission_deberta_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|early_readmission_deberta_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|607.5 MB| + +## References + +https://huggingface.co/austin/early-readmission-deberta + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-embed_andegpt_h768_es.md b/docs/_posts/ahmedlone127/2024-09-08-embed_andegpt_h768_es.md new file mode 100644 index 00000000000000..e51b59657d37fb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-embed_andegpt_h768_es.md @@ -0,0 +1,86 @@ +--- +layout: model +title: Castilian, Spanish embed_andegpt_h768 MPNetEmbeddings from enpaiva +author: John Snow Labs +name: embed_andegpt_h768 +date: 2024-09-08 +tags: [es, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: es +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`embed_andegpt_h768` is a Castilian, Spanish model originally trained by enpaiva. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/embed_andegpt_h768_es_5.5.0_3.0_1725815126162.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/embed_andegpt_h768_es_5.5.0_3.0_1725815126162.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("embed_andegpt_h768","es") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("embed_andegpt_h768","es") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|embed_andegpt_h768| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|es| +|Size:|379.3 MB| + +## References + +https://huggingface.co/enpaiva/embed-andegpt-H768 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-english_tonga_tonga_islands_paiute_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-english_tonga_tonga_islands_paiute_pipeline_en.md new file mode 100644 index 00000000000000..74407e68758928 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-english_tonga_tonga_islands_paiute_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English english_tonga_tonga_islands_paiute_pipeline pipeline MarianTransformer from jcole333 +author: John Snow Labs +name: english_tonga_tonga_islands_paiute_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`english_tonga_tonga_islands_paiute_pipeline` is a English model originally trained by jcole333. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/english_tonga_tonga_islands_paiute_pipeline_en_5.5.0_3.0_1725824778121.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/english_tonga_tonga_islands_paiute_pipeline_en_5.5.0_3.0_1725824778121.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("english_tonga_tonga_islands_paiute_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("english_tonga_tonga_islands_paiute_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
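
For quick single-sentence checks, `PretrainedPipeline` also exposes `annotate()`, which runs the same stages through a LightPipeline. The output dictionary key for the translated text is an assumption here, so the sketch prints the available keys first.

```python
# Sketch: light-weight use of the translation pipeline above via annotate().
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("english_tonga_tonga_islands_paiute_pipeline", lang = "en")

result = pipeline.annotate("How are you today?")
print(result.keys())              # confirm the output column of the MarianTransformer stage
print(result.get("translation"))  # key name is an assumption, not taken from this card
```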
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|english_tonga_tonga_islands_paiute_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|533.1 MB| + +## References + +https://huggingface.co/jcole333/en-to-paiute + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-f_x_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-f_x_pipeline_en.md new file mode 100644 index 00000000000000..b16d77be655686 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-f_x_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English f_x_pipeline pipeline BertForSequenceClassification from MoGP +author: John Snow Labs +name: f_x_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`f_x_pipeline` is a English model originally trained by MoGP. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/f_x_pipeline_en_5.5.0_3.0_1725825560355.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/f_x_pipeline_en_5.5.0_3.0_1725825560355.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("f_x_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("f_x_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|f_x_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|409.4 MB| + +## References + +https://huggingface.co/MoGP/f_x + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-facets2_re_10_10_en.md b/docs/_posts/ahmedlone127/2024-09-08-facets2_re_10_10_en.md new file mode 100644 index 00000000000000..d20c62854080e2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-facets2_re_10_10_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English facets2_re_10_10 MPNetEmbeddings from ingeol +author: John Snow Labs +name: facets2_re_10_10 +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`facets2_re_10_10` is a English model originally trained by ingeol. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/facets2_re_10_10_en_5.5.0_3.0_1725816717010.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/facets2_re_10_10_en_5.5.0_3.0_1725816717010.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("facets2_re_10_10","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("facets2_re_10_10","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|facets2_re_10_10| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/ingeol/facets2_re_10_10 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-facets_ep3_35_en.md b/docs/_posts/ahmedlone127/2024-09-08-facets_ep3_35_en.md new file mode 100644 index 00000000000000..6e716e0ca3047d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-facets_ep3_35_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English facets_ep3_35 MPNetEmbeddings from ingeol +author: John Snow Labs +name: facets_ep3_35 +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`facets_ep3_35` is a English model originally trained by ingeol. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/facets_ep3_35_en_5.5.0_3.0_1725769739214.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/facets_ep3_35_en_5.5.0_3.0_1725769739214.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("facets_ep3_35","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("facets_ep3_35","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|facets_ep3_35| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/ingeol/facets_ep3_35 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-final_model1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-final_model1_pipeline_en.md new file mode 100644 index 00000000000000..d6f8073d600b0f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-final_model1_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English final_model1_pipeline pipeline DistilBertForSequenceClassification from sachit56 +author: John Snow Labs +name: final_model1_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`final_model1_pipeline` is a English model originally trained by sachit56. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/final_model1_pipeline_en_5.5.0_3.0_1725774761630.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/final_model1_pipeline_en_5.5.0_3.0_1725774761630.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("final_model1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("final_model1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
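+
+The snippet above assumes an active Spark NLP session and an input Spark DataFrame `df` with a `text` column. A minimal sketch of that setup (illustrative values only) might look like:
+
+```python
+# Sketch: start Spark NLP and build a toy DataFrame with a "text" column.
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+
+pipeline = PretrainedPipeline("final_model1_pipeline", lang = "en")
+annotations = pipeline.transform(df)
+```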
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|final_model1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/sachit56/final_model1 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-finalassginment_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-finalassginment_pipeline_en.md new file mode 100644 index 00000000000000..86f980b3f352fb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-finalassginment_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English finalassginment_pipeline pipeline MarianTransformer from sanghyo +author: John Snow Labs +name: finalassginment_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finalassginment_pipeline` is a English model originally trained by sanghyo. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finalassginment_pipeline_en_5.5.0_3.0_1725825238085.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finalassginment_pipeline_en_5.5.0_3.0_1725825238085.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("finalassginment_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("finalassginment_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finalassginment_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|541.1 MB| + +## References + +https://huggingface.co/sanghyo/FinalAssginment + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-fine_tuned_model_2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-fine_tuned_model_2_pipeline_en.md new file mode 100644 index 00000000000000..56dcaf74676263 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-fine_tuned_model_2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English fine_tuned_model_2_pipeline pipeline AlbertForSequenceClassification from KalaiselvanD +author: John Snow Labs +name: fine_tuned_model_2_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained AlbertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`fine_tuned_model_2_pipeline` is a English model originally trained by KalaiselvanD. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/fine_tuned_model_2_pipeline_en_5.5.0_3.0_1725767079403.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/fine_tuned_model_2_pipeline_en_5.5.0_3.0_1725767079403.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("fine_tuned_model_2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("fine_tuned_model_2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|fine_tuned_model_2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|44.2 MB| + +## References + +https://huggingface.co/KalaiselvanD/fine_tuned_model_2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- AlbertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-finemodel_from_kde4_english_tonga_tonga_islands_chinese_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-finemodel_from_kde4_english_tonga_tonga_islands_chinese_pipeline_en.md new file mode 100644 index 00000000000000..2f11b7dcb1165b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-finemodel_from_kde4_english_tonga_tonga_islands_chinese_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English finemodel_from_kde4_english_tonga_tonga_islands_chinese_pipeline pipeline MarianTransformer from ArierMiao +author: John Snow Labs +name: finemodel_from_kde4_english_tonga_tonga_islands_chinese_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finemodel_from_kde4_english_tonga_tonga_islands_chinese_pipeline` is a English model originally trained by ArierMiao. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finemodel_from_kde4_english_tonga_tonga_islands_chinese_pipeline_en_5.5.0_3.0_1725824158724.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finemodel_from_kde4_english_tonga_tonga_islands_chinese_pipeline_en_5.5.0_3.0_1725824158724.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("finemodel_from_kde4_english_tonga_tonga_islands_chinese_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("finemodel_from_kde4_english_tonga_tonga_islands_chinese_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finemodel_from_kde4_english_tonga_tonga_islands_chinese_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|541.1 MB| + +## References + +https://huggingface.co/ArierMiao/finemodel-from-kde4-en-to-zh + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-finetuned_aihub_full_english_tonga_tonga_islands_korean_en.md b/docs/_posts/ahmedlone127/2024-09-08-finetuned_aihub_full_english_tonga_tonga_islands_korean_en.md new file mode 100644 index 00000000000000..d6c5b281c367f9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-finetuned_aihub_full_english_tonga_tonga_islands_korean_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English finetuned_aihub_full_english_tonga_tonga_islands_korean MarianTransformer from YoungBinLee +author: John Snow Labs +name: finetuned_aihub_full_english_tonga_tonga_islands_korean +date: 2024-09-08 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuned_aihub_full_english_tonga_tonga_islands_korean` is a English model originally trained by YoungBinLee. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuned_aihub_full_english_tonga_tonga_islands_korean_en_5.5.0_3.0_1725795425247.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuned_aihub_full_english_tonga_tonga_islands_korean_en_5.5.0_3.0_1725795425247.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("finetuned_aihub_full_english_tonga_tonga_islands_korean","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("finetuned_aihub_full_english_tonga_tonga_islands_korean","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
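+
+Assuming the Python pipeline above has been run, the translated text can then be read from the `translation` column, for example:
+
+```python
+# Sketch: show the translated sentences produced by the MarianTransformer stage.
+pipelineDF.select("translation.result").show(truncate=False)
+```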
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuned_aihub_full_english_tonga_tonga_islands_korean| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|1.0 GB| + +## References + +https://huggingface.co/YoungBinLee/finetuned-aihub-full-en-to-ko \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-finetuned_mlm_accelerate_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-finetuned_mlm_accelerate_pipeline_en.md new file mode 100644 index 00000000000000..9cf9f96e5533fa --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-finetuned_mlm_accelerate_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English finetuned_mlm_accelerate_pipeline pipeline DistilBertEmbeddings from cxx5208 +author: John Snow Labs +name: finetuned_mlm_accelerate_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuned_mlm_accelerate_pipeline` is a English model originally trained by cxx5208. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuned_mlm_accelerate_pipeline_en_5.5.0_3.0_1725828906868.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuned_mlm_accelerate_pipeline_en_5.5.0_3.0_1725828906868.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("finetuned_mlm_accelerate_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("finetuned_mlm_accelerate_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuned_mlm_accelerate_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/cxx5208/Finetuned-MLM-accelerate + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-finetuned_ner_kolj4_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-finetuned_ner_kolj4_pipeline_en.md new file mode 100644 index 00000000000000..0d09e32d564178 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-finetuned_ner_kolj4_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English finetuned_ner_kolj4_pipeline pipeline DistilBertForTokenClassification from kolj4 +author: John Snow Labs +name: finetuned_ner_kolj4_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuned_ner_kolj4_pipeline` is a English model originally trained by kolj4. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuned_ner_kolj4_pipeline_en_5.5.0_3.0_1725788665506.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuned_ner_kolj4_pipeline_en_5.5.0_3.0_1725788665506.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("finetuned_ner_kolj4_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("finetuned_ner_kolj4_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuned_ner_kolj4_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/kolj4/finetuned-ner + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-finetuning_bm25_small_en.md b/docs/_posts/ahmedlone127/2024-09-08-finetuning_bm25_small_en.md new file mode 100644 index 00000000000000..424d884dfb9230 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-finetuning_bm25_small_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English finetuning_bm25_small MPNetEmbeddings from jhsmith +author: John Snow Labs +name: finetuning_bm25_small +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuning_bm25_small` is a English model originally trained by jhsmith. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuning_bm25_small_en_5.5.0_3.0_1725769503960.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuning_bm25_small_en_5.5.0_3.0_1725769503960.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("finetuning_bm25_small","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("finetuning_bm25_small","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuning_bm25_small| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.2 MB| + +## References + +https://huggingface.co/jhsmith/finetuning_bm25_small \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-finetuning_sentiment_model_3000_samples_nico10hahn17_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-finetuning_sentiment_model_3000_samples_nico10hahn17_pipeline_en.md new file mode 100644 index 00000000000000..4d70a28e86642b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-finetuning_sentiment_model_3000_samples_nico10hahn17_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English finetuning_sentiment_model_3000_samples_nico10hahn17_pipeline pipeline DistilBertForSequenceClassification from Nico10Hahn17 +author: John Snow Labs +name: finetuning_sentiment_model_3000_samples_nico10hahn17_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuning_sentiment_model_3000_samples_nico10hahn17_pipeline` is a English model originally trained by Nico10Hahn17. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuning_sentiment_model_3000_samples_nico10hahn17_pipeline_en_5.5.0_3.0_1725775263323.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuning_sentiment_model_3000_samples_nico10hahn17_pipeline_en_5.5.0_3.0_1725775263323.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("finetuning_sentiment_model_3000_samples_nico10hahn17_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("finetuning_sentiment_model_3000_samples_nico10hahn17_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuning_sentiment_model_3000_samples_nico10hahn17_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/Nico10Hahn17/finetuning-sentiment-model-3000-samples + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-finetuning_sentiment_model_3000_samples_saahil1801_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-finetuning_sentiment_model_3000_samples_saahil1801_pipeline_en.md new file mode 100644 index 00000000000000..6454698bd75c7d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-finetuning_sentiment_model_3000_samples_saahil1801_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English finetuning_sentiment_model_3000_samples_saahil1801_pipeline pipeline DistilBertForSequenceClassification from Saahil1801 +author: John Snow Labs +name: finetuning_sentiment_model_3000_samples_saahil1801_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuning_sentiment_model_3000_samples_saahil1801_pipeline` is a English model originally trained by Saahil1801. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuning_sentiment_model_3000_samples_saahil1801_pipeline_en_5.5.0_3.0_1725808617654.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuning_sentiment_model_3000_samples_saahil1801_pipeline_en_5.5.0_3.0_1725808617654.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("finetuning_sentiment_model_3000_samples_saahil1801_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("finetuning_sentiment_model_3000_samples_saahil1801_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuning_sentiment_model_3000_samples_saahil1801_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/Saahil1801/finetuning-sentiment-model-3000-samples + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-french_bm_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-french_bm_pipeline_en.md new file mode 100644 index 00000000000000..85a634625aa6ef --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-french_bm_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English french_bm_pipeline pipeline MarianTransformer from Ife +author: John Snow Labs +name: french_bm_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`french_bm_pipeline` is a English model originally trained by Ife. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/french_bm_pipeline_en_5.5.0_3.0_1725795045469.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/french_bm_pipeline_en_5.5.0_3.0_1725795045469.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("french_bm_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("french_bm_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|french_bm_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|507.9 MB| + +## References + +https://huggingface.co/Ife/FR-BM + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-from_classifier_v0_en.md b/docs/_posts/ahmedlone127/2024-09-08-from_classifier_v0_en.md new file mode 100644 index 00000000000000..7c741aaa1e264a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-from_classifier_v0_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English from_classifier_v0 MPNetEmbeddings from futuredatascience +author: John Snow Labs +name: from_classifier_v0 +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`from_classifier_v0` is a English model originally trained by futuredatascience. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/from_classifier_v0_en_5.5.0_3.0_1725769895608.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/from_classifier_v0_en_5.5.0_3.0_1725769895608.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("from_classifier_v0","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("from_classifier_v0","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|from_classifier_v0| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/futuredatascience/from-classifier-v0 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-hindi_roberta_ner_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-hindi_roberta_ner_pipeline_en.md new file mode 100644 index 00000000000000..baed9b1371911d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-hindi_roberta_ner_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English hindi_roberta_ner_pipeline pipeline XlmRoBertaForTokenClassification from mirfan899 +author: John Snow Labs +name: hindi_roberta_ner_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`hindi_roberta_ner_pipeline` is a English model originally trained by mirfan899. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/hindi_roberta_ner_pipeline_en_5.5.0_3.0_1725785292226.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/hindi_roberta_ner_pipeline_en_5.5.0_3.0_1725785292226.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("hindi_roberta_ner_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("hindi_roberta_ner_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|hindi_roberta_ner_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|796.0 MB| + +## References + +https://huggingface.co/mirfan899/hindi-roberta-ner + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-inde_4_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-inde_4_pipeline_en.md new file mode 100644 index 00000000000000..7ee41c19dc8bb9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-inde_4_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English inde_4_pipeline pipeline RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: inde_4_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`inde_4_pipeline` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/inde_4_pipeline_en_5.5.0_3.0_1725779009322.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/inde_4_pipeline_en_5.5.0_3.0_1725779009322.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("inde_4_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("inde_4_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|inde_4_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Inde_4 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-isy503_sentiment_analysis2_iamke_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-isy503_sentiment_analysis2_iamke_pipeline_en.md new file mode 100644 index 00000000000000..876e5b132900e2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-isy503_sentiment_analysis2_iamke_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English isy503_sentiment_analysis2_iamke_pipeline pipeline DistilBertForSequenceClassification from IamKE +author: John Snow Labs +name: isy503_sentiment_analysis2_iamke_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`isy503_sentiment_analysis2_iamke_pipeline` is a English model originally trained by IamKE. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/isy503_sentiment_analysis2_iamke_pipeline_en_5.5.0_3.0_1725775073670.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/isy503_sentiment_analysis2_iamke_pipeline_en_5.5.0_3.0_1725775073670.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("isy503_sentiment_analysis2_iamke_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("isy503_sentiment_analysis2_iamke_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|isy503_sentiment_analysis2_iamke_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/IamKE/ISY503-sentiment_analysis2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-iwslt17_marian_small_ctx4_cwd3_english_french_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-iwslt17_marian_small_ctx4_cwd3_english_french_pipeline_en.md new file mode 100644 index 00000000000000..0b0936d1314a53 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-iwslt17_marian_small_ctx4_cwd3_english_french_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English iwslt17_marian_small_ctx4_cwd3_english_french_pipeline pipeline MarianTransformer from context-mt +author: John Snow Labs +name: iwslt17_marian_small_ctx4_cwd3_english_french_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`iwslt17_marian_small_ctx4_cwd3_english_french_pipeline` is a English model originally trained by context-mt. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/iwslt17_marian_small_ctx4_cwd3_english_french_pipeline_en_5.5.0_3.0_1725824913773.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/iwslt17_marian_small_ctx4_cwd3_english_french_pipeline_en_5.5.0_3.0_1725824913773.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("iwslt17_marian_small_ctx4_cwd3_english_french_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("iwslt17_marian_small_ctx4_cwd3_english_french_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|iwslt17_marian_small_ctx4_cwd3_english_french_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|508.9 MB| + +## References + +https://huggingface.co/context-mt/iwslt17-marian-small-ctx4-cwd3-en-fr + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-jnlpba_clinicalbert_ner_en.md b/docs/_posts/ahmedlone127/2024-09-08-jnlpba_clinicalbert_ner_en.md new file mode 100644 index 00000000000000..979b0683118996 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-jnlpba_clinicalbert_ner_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English jnlpba_clinicalbert_ner DistilBertForTokenClassification from judithrosell +author: John Snow Labs +name: jnlpba_clinicalbert_ner +date: 2024-09-08 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`jnlpba_clinicalbert_ner` is a English model originally trained by judithrosell. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/jnlpba_clinicalbert_ner_en_5.5.0_3.0_1725837514774.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/jnlpba_clinicalbert_ner_en_5.5.0_3.0_1725837514774.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("jnlpba_clinicalbert_ner","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("jnlpba_clinicalbert_ner", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
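+
+Assuming the Python pipeline above has been run, the predicted entity tags can be inspected alongside the tokens, for example:
+
+```python
+# Sketch: show tokens and their predicted NER tags side by side.
+pipelineDF.select("token.result", "ner.result").show(truncate=False)
+```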
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|jnlpba_clinicalbert_ner| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|505.4 MB| + +## References + +https://huggingface.co/judithrosell/JNLPBA_ClinicalBERT_NER \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-kirundi_english_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-kirundi_english_pipeline_en.md new file mode 100644 index 00000000000000..11affdbfbce7cf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-kirundi_english_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English kirundi_english_pipeline pipeline MarianTransformer from icep0ps +author: John Snow Labs +name: kirundi_english_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`kirundi_english_pipeline` is a English model originally trained by icep0ps. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/kirundi_english_pipeline_en_5.5.0_3.0_1725766205165.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/kirundi_english_pipeline_en_5.5.0_3.0_1725766205165.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("kirundi_english_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("kirundi_english_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|kirundi_english_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|211.0 MB| + +## References + +https://huggingface.co/icep0ps/rn-en + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-lab1_ran_en.md b/docs/_posts/ahmedlone127/2024-09-08-lab1_ran_en.md new file mode 100644 index 00000000000000..4e682dbdea5ed4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-lab1_ran_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English lab1_ran MarianTransformer from wrchen1 +author: John Snow Labs +name: lab1_ran +date: 2024-09-08 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`lab1_ran` is a English model originally trained by wrchen1. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/lab1_ran_en_5.5.0_3.0_1725824563754.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/lab1_ran_en_5.5.0_3.0_1725824563754.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("lab1_ran","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("lab1_ran","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|lab1_ran| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.4 MB| + +## References + +https://huggingface.co/wrchen1/lab1_ran \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-lab1_random_lailemon_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-lab1_random_lailemon_pipeline_en.md new file mode 100644 index 00000000000000..573b0a66eeef09 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-lab1_random_lailemon_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English lab1_random_lailemon_pipeline pipeline MarianTransformer from LaiLemon +author: John Snow Labs +name: lab1_random_lailemon_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`lab1_random_lailemon_pipeline` is a English model originally trained by LaiLemon. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/lab1_random_lailemon_pipeline_en_5.5.0_3.0_1725832222984.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/lab1_random_lailemon_pipeline_en_5.5.0_3.0_1725832222984.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("lab1_random_lailemon_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("lab1_random_lailemon_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|lab1_random_lailemon_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|510.3 MB| + +## References + +https://huggingface.co/LaiLemon/lab1_random + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-lab2_8bit_adam_reshphil_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-lab2_8bit_adam_reshphil_pipeline_en.md new file mode 100644 index 00000000000000..a0cf85cd1f3b12 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-lab2_8bit_adam_reshphil_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English lab2_8bit_adam_reshphil_pipeline pipeline MarianTransformer from Reshphil +author: John Snow Labs +name: lab2_8bit_adam_reshphil_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`lab2_8bit_adam_reshphil_pipeline` is a English model originally trained by Reshphil. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/lab2_8bit_adam_reshphil_pipeline_en_5.5.0_3.0_1725766288069.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/lab2_8bit_adam_reshphil_pipeline_en_5.5.0_3.0_1725766288069.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("lab2_8bit_adam_reshphil_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("lab2_8bit_adam_reshphil_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|lab2_8bit_adam_reshphil_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|508.8 MB| + +## References + +https://huggingface.co/Reshphil/lab2_8bit_adam + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-legalbert_for_rhetorical_role_labeling_en.md b/docs/_posts/ahmedlone127/2024-09-08-legalbert_for_rhetorical_role_labeling_en.md new file mode 100644 index 00000000000000..dc222c64a559c5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-legalbert_for_rhetorical_role_labeling_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English legalbert_for_rhetorical_role_labeling BertForSequenceClassification from engineersaloni159 +author: John Snow Labs +name: legalbert_for_rhetorical_role_labeling +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`legalbert_for_rhetorical_role_labeling` is a English model originally trained by engineersaloni159. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/legalbert_for_rhetorical_role_labeling_en_5.5.0_3.0_1725819758805.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/legalbert_for_rhetorical_role_labeling_en_5.5.0_3.0_1725819758805.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = BertForSequenceClassification.pretrained("legalbert_for_rhetorical_role_labeling","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = BertForSequenceClassification.pretrained("legalbert_for_rhetorical_role_labeling", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
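+
+Assuming the Python pipeline above has been run, the predicted rhetorical-role label for each input row can be read from the `class` column, for example:
+
+```python
+# Sketch: show each input text with its predicted class label.
+pipelineDF.select("text", "class.result").show(truncate=False)
+```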
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|legalbert_for_rhetorical_role_labeling| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.5 MB| + +## References + +https://huggingface.co/engineersaloni159/legalBERT_for_rhetorical_role_labeling \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-lenu_ewe_en.md b/docs/_posts/ahmedlone127/2024-09-08-lenu_ewe_en.md new file mode 100644 index 00000000000000..1d9b08bc7ebdd7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-lenu_ewe_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English lenu_ewe BertForSequenceClassification from Sociovestix +author: John Snow Labs +name: lenu_ewe +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`lenu_ewe` is a English model originally trained by Sociovestix. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/lenu_ewe_en_5.5.0_3.0_1725761238318.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/lenu_ewe_en_5.5.0_3.0_1725761238318.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = BertForSequenceClassification.pretrained("lenu_ewe","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = BertForSequenceClassification.pretrained("lenu_ewe", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|lenu_ewe| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|627.8 MB| + +## References + +https://huggingface.co/Sociovestix/lenu_EE \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-lenu_polish_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-lenu_polish_pipeline_en.md new file mode 100644 index 00000000000000..922e0865ed95b1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-lenu_polish_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English lenu_polish_pipeline pipeline BertForSequenceClassification from Sociovestix +author: John Snow Labs +name: lenu_polish_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`lenu_polish_pipeline` is a English model originally trained by Sociovestix. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/lenu_polish_pipeline_en_5.5.0_3.0_1725761132296.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/lenu_polish_pipeline_en_5.5.0_3.0_1725761132296.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("lenu_polish_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("lenu_polish_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
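+
+The snippet above assumes an existing Spark DataFrame `df` holding the input text. A minimal sketch of preparing one and running the pipeline end to end (assuming an active Spark NLP session and that the pipeline reads its input from a `text` column, as the other examples in these docs do):
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()  # start (or reuse) a Spark NLP session
+
+# Assumed input column name; the included DocumentAssembler is expected to read from "text".
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+
+pipeline = PretrainedPipeline("lenu_polish_pipeline", lang = "en")
+annotations = pipeline.transform(df)
+annotations.select("class.result").show(truncate=False)  # output column name assumed from the matching model card
+```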
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|lenu_polish_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|495.9 MB| + +## References + +https://huggingface.co/Sociovestix/lenu_PL + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-lenu_us_catalan_en.md b/docs/_posts/ahmedlone127/2024-09-08-lenu_us_catalan_en.md new file mode 100644 index 00000000000000..1b70b47071dff3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-lenu_us_catalan_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English lenu_us_catalan BertForSequenceClassification from Sociovestix +author: John Snow Labs +name: lenu_us_catalan +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`lenu_us_catalan` is a English model originally trained by Sociovestix. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/lenu_us_catalan_en_5.5.0_3.0_1725768563678.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/lenu_us_catalan_en_5.5.0_3.0_1725768563678.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = BertForSequenceClassification.pretrained("lenu_us_catalan","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = BertForSequenceClassification.pretrained("lenu_us_catalan", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|lenu_us_catalan| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.4 MB| + +## References + +https://huggingface.co/Sociovestix/lenu_US-CA \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-lin_camembert_base_en.md b/docs/_posts/ahmedlone127/2024-09-08-lin_camembert_base_en.md new file mode 100644 index 00000000000000..e0fffd037e1c90 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-lin_camembert_base_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English lin_camembert_base CamemBertEmbeddings from linomurali +author: John Snow Labs +name: lin_camembert_base +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`lin_camembert_base` is a English model originally trained by linomurali. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/lin_camembert_base_en_5.5.0_3.0_1725786211565.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/lin_camembert_base_en_5.5.0_3.0_1725786211565.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("lin_camembert_base","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("lin_camembert_base","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|lin_camembert_base| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/linomurali/lin_camembert-base \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-local_politics_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-local_politics_pipeline_en.md new file mode 100644 index 00000000000000..bd375a57044eeb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-local_politics_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English local_politics_pipeline pipeline BertForSequenceClassification from nruigrok +author: John Snow Labs +name: local_politics_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`local_politics_pipeline` is a English model originally trained by nruigrok. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/local_politics_pipeline_en_5.5.0_3.0_1725761763682.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/local_politics_pipeline_en_5.5.0_3.0_1725761763682.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("local_politics_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("local_politics_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|local_politics_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|409.0 MB| + +## References + +https://huggingface.co/nruigrok/local_politics + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-lug_english_translation_2024_v1_blue_21_meteor_41_bert_92_modata_en.md b/docs/_posts/ahmedlone127/2024-09-08-lug_english_translation_2024_v1_blue_21_meteor_41_bert_92_modata_en.md new file mode 100644 index 00000000000000..c1e1f4a1c45771 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-lug_english_translation_2024_v1_blue_21_meteor_41_bert_92_modata_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English lug_english_translation_2024_v1_blue_21_meteor_41_bert_92_modata MarianTransformer from atwine +author: John Snow Labs +name: lug_english_translation_2024_v1_blue_21_meteor_41_bert_92_modata +date: 2024-09-08 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`lug_english_translation_2024_v1_blue_21_meteor_41_bert_92_modata` is a English model originally trained by atwine. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/lug_english_translation_2024_v1_blue_21_meteor_41_bert_92_modata_en_5.5.0_3.0_1725824531022.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/lug_english_translation_2024_v1_blue_21_meteor_41_bert_92_modata_en_5.5.0_3.0_1725824531022.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("lug_english_translation_2024_v1_blue_21_meteor_41_bert_92_modata","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("lug_english_translation_2024_v1_blue_21_meteor_41_bert_92_modata","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
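+
+Once the pipeline has run, the translated text ends up in the `translation` annotation column. A minimal, hypothetical inspection step (not part of the generated example above):
+
+```python
+# Sketch only: each "translation" annotation's `result` field holds the translated sentence.
+pipelineDF.select("translation.result").show(truncate=False)
+```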
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|lug_english_translation_2024_v1_blue_21_meteor_41_bert_92_modata| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|532.6 MB| + +## References + +https://huggingface.co/atwine/lug_en_translation_2024_v1_BLUE_21_METEOR_41_BERT_92_modata \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-macdonaldsentiment_model_en.md b/docs/_posts/ahmedlone127/2024-09-08-macdonaldsentiment_model_en.md new file mode 100644 index 00000000000000..b6677a632eee4a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-macdonaldsentiment_model_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English macdonaldsentiment_model DistilBertForSequenceClassification from Liuxuanxi +author: John Snow Labs +name: macdonaldsentiment_model +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`macdonaldsentiment_model` is a English model originally trained by Liuxuanxi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/macdonaldsentiment_model_en_5.5.0_3.0_1725775544881.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/macdonaldsentiment_model_en_5.5.0_3.0_1725775544881.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DistilBertForSequenceClassification.pretrained("macdonaldsentiment_model","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DistilBertForSequenceClassification.pretrained("macdonaldsentiment_model", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|macdonaldsentiment_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|507.6 MB| + +## References + +https://huggingface.co/Liuxuanxi/macdonaldsentiment_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-magpie_babe_ft_en.md b/docs/_posts/ahmedlone127/2024-09-08-magpie_babe_ft_en.md new file mode 100644 index 00000000000000..38b9ca5d7311cc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-magpie_babe_ft_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English magpie_babe_ft RoBertaForSequenceClassification from mediabiasgroup +author: John Snow Labs +name: magpie_babe_ft +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`magpie_babe_ft` is a English model originally trained by mediabiasgroup. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/magpie_babe_ft_en_5.5.0_3.0_1725779017091.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/magpie_babe_ft_en_5.5.0_3.0_1725779017091.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("magpie_babe_ft","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("magpie_babe_ft", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|magpie_babe_ft| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|467.8 MB| + +## References + +https://huggingface.co/mediabiasgroup/magpie-babe-ft \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-maltese_hitz_basque_spanish_pipeline_eu.md b/docs/_posts/ahmedlone127/2024-09-08-maltese_hitz_basque_spanish_pipeline_eu.md new file mode 100644 index 00000000000000..bdde0430cfd02d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-maltese_hitz_basque_spanish_pipeline_eu.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Basque maltese_hitz_basque_spanish_pipeline pipeline MarianTransformer from HiTZ +author: John Snow Labs +name: maltese_hitz_basque_spanish_pipeline +date: 2024-09-08 +tags: [eu, open_source, pipeline, onnx] +task: Translation +language: eu +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`maltese_hitz_basque_spanish_pipeline` is a Basque model originally trained by HiTZ. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/maltese_hitz_basque_spanish_pipeline_eu_5.5.0_3.0_1725831904999.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/maltese_hitz_basque_spanish_pipeline_eu_5.5.0_3.0_1725831904999.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("maltese_hitz_basque_spanish_pipeline", lang = "eu") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("maltese_hitz_basque_spanish_pipeline", lang = "eu") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|maltese_hitz_basque_spanish_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|eu| +|Size:|225.9 MB| + +## References + +https://huggingface.co/HiTZ/mt-hitz-eu-es + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-marabert22_model_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-marabert22_model_pipeline_en.md new file mode 100644 index 00000000000000..75cd5978468d9c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-marabert22_model_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English marabert22_model_pipeline pipeline BertForSequenceClassification from aya2003 +author: John Snow Labs +name: marabert22_model_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marabert22_model_pipeline` is a English model originally trained by aya2003. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marabert22_model_pipeline_en_5.5.0_3.0_1725826062931.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marabert22_model_pipeline_en_5.5.0_3.0_1725826062931.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("marabert22_model_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("marabert22_model_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marabert22_model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|610.9 MB| + +## References + +https://huggingface.co/aya2003/marabert22-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430_en.md b/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430_en.md new file mode 100644 index 00000000000000..ccbf719d0ec344 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430 MarianTransformer from Favourphilic +author: John Snow Labs +name: marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430 +date: 2024-09-08 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430` is a English model originally trained by Favourphilic. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430_en_5.5.0_3.0_1725825205012.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430_en_5.5.0_3.0_1725825205012.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.1 MB| + +## References + +https://huggingface.co/Favourphilic/marian-finetuned-kde4-en-to-fr100424-1430 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430_pipeline_en.md new file mode 100644 index 00000000000000..3d6c619cc171e1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430_pipeline pipeline MarianTransformer from Favourphilic +author: John Snow Labs +name: marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430_pipeline` is a English model originally trained by Favourphilic. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430_pipeline_en_5.5.0_3.0_1725825231664.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430_pipeline_en_5.5.0_3.0_1725825231664.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_kde4_english_tonga_tonga_islands_fr100424_1430_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|508.6 MB| + +## References + +https://huggingface.co/Favourphilic/marian-finetuned-kde4-en-to-fr100424-1430 + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_fadliaulawi_en.md b/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_fadliaulawi_en.md new file mode 100644 index 00000000000000..f4e5de9b123167 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_fadliaulawi_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_fadliaulawi MarianTransformer from fadliaulawi +author: John Snow Labs +name: marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_fadliaulawi +date: 2024-09-08 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_fadliaulawi` is a English model originally trained by fadliaulawi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_fadliaulawi_en_5.5.0_3.0_1725766353143.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_fadliaulawi_en_5.5.0_3.0_1725766353143.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_fadliaulawi","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_fadliaulawi","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_fadliaulawi| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.2 MB| + +## References + +https://huggingface.co/fadliaulawi/marian-finetuned-kde4-en-to-fr-accelerate \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_french_arham061_en.md b/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_french_arham061_en.md new file mode 100644 index 00000000000000..a179b0e110c028 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_french_arham061_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English marian_finetuned_kde4_english_tonga_tonga_islands_french_arham061 MarianTransformer from arham061 +author: John Snow Labs +name: marian_finetuned_kde4_english_tonga_tonga_islands_french_arham061 +date: 2024-09-08 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_kde4_english_tonga_tonga_islands_french_arham061` is a English model originally trained by arham061. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_arham061_en_5.5.0_3.0_1725795726907.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_arham061_en_5.5.0_3.0_1725795726907.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("marian_finetuned_kde4_english_tonga_tonga_islands_french_arham061","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("marian_finetuned_kde4_english_tonga_tonga_islands_french_arham061","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_kde4_english_tonga_tonga_islands_french_arham061| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.2 MB| + +## References + +https://huggingface.co/arham061/marian-finetuned-kde4-en-to-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_french_nielso_en.md b/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_french_nielso_en.md new file mode 100644 index 00000000000000..dd0fcf342bdc1e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_french_nielso_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English marian_finetuned_kde4_english_tonga_tonga_islands_french_nielso MarianTransformer from nielso +author: John Snow Labs +name: marian_finetuned_kde4_english_tonga_tonga_islands_french_nielso +date: 2024-09-08 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_kde4_english_tonga_tonga_islands_french_nielso` is a English model originally trained by nielso. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_nielso_en_5.5.0_3.0_1725831290991.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_nielso_en_5.5.0_3.0_1725831290991.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("marian_finetuned_kde4_english_tonga_tonga_islands_french_nielso","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("marian_finetuned_kde4_english_tonga_tonga_islands_french_nielso","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_kde4_english_tonga_tonga_islands_french_nielso| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.1 MB| + +## References + +https://huggingface.co/nielso/marian-finetuned-kde4-en-to-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_german_accelerate_translator_nlp_course_chapter7_section3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_german_accelerate_translator_nlp_course_chapter7_section3_pipeline_en.md new file mode 100644 index 00000000000000..f375acc1a7dc67 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-marian_finetuned_kde4_english_tonga_tonga_islands_german_accelerate_translator_nlp_course_chapter7_section3_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English marian_finetuned_kde4_english_tonga_tonga_islands_german_accelerate_translator_nlp_course_chapter7_section3_pipeline pipeline MarianTransformer from BanUrsus +author: John Snow Labs +name: marian_finetuned_kde4_english_tonga_tonga_islands_german_accelerate_translator_nlp_course_chapter7_section3_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_kde4_english_tonga_tonga_islands_german_accelerate_translator_nlp_course_chapter7_section3_pipeline` is a English model originally trained by BanUrsus. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_german_accelerate_translator_nlp_course_chapter7_section3_pipeline_en_5.5.0_3.0_1725766574948.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_german_accelerate_translator_nlp_course_chapter7_section3_pipeline_en_5.5.0_3.0_1725766574948.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("marian_finetuned_kde4_english_tonga_tonga_islands_german_accelerate_translator_nlp_course_chapter7_section3_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("marian_finetuned_kde4_english_tonga_tonga_islands_german_accelerate_translator_nlp_course_chapter7_section3_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_kde4_english_tonga_tonga_islands_german_accelerate_translator_nlp_course_chapter7_section3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|499.6 MB| + +## References + +https://huggingface.co/BanUrsus/marian-finetuned-kde4-en-to-de-accelerate-translator_nlp-course-chapter7-section3 + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-marianmt_many2eng_leb_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-marianmt_many2eng_leb_pipeline_en.md new file mode 100644 index 00000000000000..7a9366e4231f9e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-marianmt_many2eng_leb_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English marianmt_many2eng_leb_pipeline pipeline MarianTransformer from jq +author: John Snow Labs +name: marianmt_many2eng_leb_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marianmt_many2eng_leb_pipeline` is a English model originally trained by jq. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marianmt_many2eng_leb_pipeline_en_5.5.0_3.0_1725824840511.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marianmt_many2eng_leb_pipeline_en_5.5.0_3.0_1725824840511.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("marianmt_many2eng_leb_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("marianmt_many2eng_leb_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marianmt_many2eng_leb_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|532.6 MB| + +## References + +https://huggingface.co/jq/marianmt_many2eng_leb + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-medical_english_chinese_8_21_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-medical_english_chinese_8_21_pipeline_en.md new file mode 100644 index 00000000000000..4454516852b7bb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-medical_english_chinese_8_21_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English medical_english_chinese_8_21_pipeline pipeline MarianTransformer from DogGoesBark +author: John Snow Labs +name: medical_english_chinese_8_21_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`medical_english_chinese_8_21_pipeline` is a English model originally trained by DogGoesBark. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/medical_english_chinese_8_21_pipeline_en_5.5.0_3.0_1725832594047.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/medical_english_chinese_8_21_pipeline_en_5.5.0_3.0_1725832594047.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("medical_english_chinese_8_21_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("medical_english_chinese_8_21_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|medical_english_chinese_8_21_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|541.3 MB| + +## References + +https://huggingface.co/DogGoesBark/medical_en_zh_8_21 + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-menorbert2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-menorbert2_pipeline_en.md new file mode 100644 index 00000000000000..e1c513dc3b26ac --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-menorbert2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English menorbert2_pipeline pipeline BertForSequenceClassification from gregwinther +author: John Snow Labs +name: menorbert2_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`menorbert2_pipeline` is a English model originally trained by gregwinther. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/menorbert2_pipeline_en_5.5.0_3.0_1725819643704.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/menorbert2_pipeline_en_5.5.0_3.0_1725819643704.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("menorbert2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("menorbert2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|menorbert2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|467.4 MB| + +## References + +https://huggingface.co/gregwinther/MeNorBert2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-mentalroberta_4label_v2_2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-mentalroberta_4label_v2_2_pipeline_en.md new file mode 100644 index 00000000000000..901b0553acfe2c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-mentalroberta_4label_v2_2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English mentalroberta_4label_v2_2_pipeline pipeline RoBertaForSequenceClassification from AliaeAI +author: John Snow Labs +name: mentalroberta_4label_v2_2_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mentalroberta_4label_v2_2_pipeline` is a English model originally trained by AliaeAI. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mentalroberta_4label_v2_2_pipeline_en_5.5.0_3.0_1725821182850.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mentalroberta_4label_v2_2_pipeline_en_5.5.0_3.0_1725821182850.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("mentalroberta_4label_v2_2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("mentalroberta_4label_v2_2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mentalroberta_4label_v2_2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/AliaeAI/MentalRoBERTa_4label_v2.2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-mlm_snodis_dis_descreption_epochs_5_en.md b/docs/_posts/ahmedlone127/2024-09-08-mlm_snodis_dis_descreption_epochs_5_en.md new file mode 100644 index 00000000000000..804274b154cf85 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-mlm_snodis_dis_descreption_epochs_5_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English mlm_snodis_dis_descreption_epochs_5 DistilBertEmbeddings from Milad1b +author: John Snow Labs +name: mlm_snodis_dis_descreption_epochs_5 +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mlm_snodis_dis_descreption_epochs_5` is a English model originally trained by Milad1b. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mlm_snodis_dis_descreption_epochs_5_en_5.5.0_3.0_1725776242905.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mlm_snodis_dis_descreption_epochs_5_en_5.5.0_3.0_1725776242905.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("mlm_snodis_dis_descreption_epochs_5","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("mlm_snodis_dis_descreption_epochs_5","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
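+
+The resulting `embeddings` column holds one annotation per token, with the token text in `result` and its vector in the `embeddings` field. A minimal, hypothetical sketch of pulling the vectors out (not part of the generated example above):
+
+```python
+from pyspark.sql.functions import col, explode
+
+# Sketch only: flatten the token-level annotations into (token, vector) rows.
+(pipelineDF
+    .select(explode(col("embeddings")).alias("emb"))
+    .select(col("emb.result").alias("token"), col("emb.embeddings").alias("vector"))
+    .show(truncate=False))
+```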
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mlm_snodis_dis_descreption_epochs_5| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|505.3 MB| + +## References + +https://huggingface.co/Milad1b/MLM_snodis_dis_descreption_epochs-5 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-mminilm_l6_v2_english_portuguese_msmarco_v1_pt.md b/docs/_posts/ahmedlone127/2024-09-08-mminilm_l6_v2_english_portuguese_msmarco_v1_pt.md new file mode 100644 index 00000000000000..af1d2628c29c61 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-mminilm_l6_v2_english_portuguese_msmarco_v1_pt.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Portuguese mminilm_l6_v2_english_portuguese_msmarco_v1 XlmRoBertaForSequenceClassification from unicamp-dl +author: John Snow Labs +name: mminilm_l6_v2_english_portuguese_msmarco_v1 +date: 2024-09-08 +tags: [pt, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: pt +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mminilm_l6_v2_english_portuguese_msmarco_v1` is a Portuguese model originally trained by unicamp-dl. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mminilm_l6_v2_english_portuguese_msmarco_v1_pt_5.5.0_3.0_1725780922046.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mminilm_l6_v2_english_portuguese_msmarco_v1_pt_5.5.0_3.0_1725780922046.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("mminilm_l6_v2_english_portuguese_msmarco_v1","pt") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("mminilm_l6_v2_english_portuguese_msmarco_v1", "pt")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
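+
+The `class` column configured above holds the predicted label for each row. A minimal sketch of reading it back out, continuing from the `pipelineDF` built in the Python snippet (the SparkSession `spark` and the toy input come from that snippet, not from the model card):
+
+```python
+# "class.result" is an array with one predicted label per document
+pipelineDF.select("text", "class.result").show(truncate=False)
+```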
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mminilm_l6_v2_english_portuguese_msmarco_v1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|pt| +|Size:|344.0 MB| + +## References + +https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-en-pt-msmarco-v1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-model_3_edges_anirudhramoo_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-model_3_edges_anirudhramoo_pipeline_en.md new file mode 100644 index 00000000000000..a521fa4eedd658 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-model_3_edges_anirudhramoo_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English model_3_edges_anirudhramoo_pipeline pipeline DistilBertForTokenClassification from anirudhramoo +author: John Snow Labs +name: model_3_edges_anirudhramoo_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`model_3_edges_anirudhramoo_pipeline` is a English model originally trained by anirudhramoo. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/model_3_edges_anirudhramoo_pipeline_en_5.5.0_3.0_1725788436783.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/model_3_edges_anirudhramoo_pipeline_en_5.5.0_3.0_1725788436783.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("model_3_edges_anirudhramoo_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("model_3_edges_anirudhramoo_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|model_3_edges_anirudhramoo_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/anirudhramoo/model_3_edges + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-modeldistilbertmaskfinetunedimdb_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-modeldistilbertmaskfinetunedimdb_pipeline_en.md new file mode 100644 index 00000000000000..03e316cf3d30c7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-modeldistilbertmaskfinetunedimdb_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English modeldistilbertmaskfinetunedimdb_pipeline pipeline DistilBertEmbeddings from jayspring +author: John Snow Labs +name: modeldistilbertmaskfinetunedimdb_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`modeldistilbertmaskfinetunedimdb_pipeline` is a English model originally trained by jayspring. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/modeldistilbertmaskfinetunedimdb_pipeline_en_5.5.0_3.0_1725828691121.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/modeldistilbertmaskfinetunedimdb_pipeline_en_5.5.0_3.0_1725828691121.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("modeldistilbertmaskfinetunedimdb_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("modeldistilbertmaskfinetunedimdb_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|modeldistilbertmaskfinetunedimdb_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/jayspring/ModelDistilBertMaskFineTunedImdb + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-mpnet_base_all_nli_triplet_korruz_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-mpnet_base_all_nli_triplet_korruz_pipeline_en.md new file mode 100644 index 00000000000000..7f0056c5f38d1b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-mpnet_base_all_nli_triplet_korruz_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English mpnet_base_all_nli_triplet_korruz_pipeline pipeline MPNetEmbeddings from korruz +author: John Snow Labs +name: mpnet_base_all_nli_triplet_korruz_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mpnet_base_all_nli_triplet_korruz_pipeline` is a English model originally trained by korruz. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mpnet_base_all_nli_triplet_korruz_pipeline_en_5.5.0_3.0_1725816126959.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mpnet_base_all_nli_triplet_korruz_pipeline_en_5.5.0_3.0_1725816126959.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("mpnet_base_all_nli_triplet_korruz_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("mpnet_base_all_nli_triplet_korruz_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mpnet_base_all_nli_triplet_korruz_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|390.1 MB| + +## References + +https://huggingface.co/korruz/mpnet-base-all-nli-triplet + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-mpnetwithoutchunking_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-mpnetwithoutchunking_pipeline_en.md new file mode 100644 index 00000000000000..9faaf198ae8bd3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-mpnetwithoutchunking_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English mpnetwithoutchunking_pipeline pipeline MPNetEmbeddings from GebeyaTalent +author: John Snow Labs +name: mpnetwithoutchunking_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mpnetwithoutchunking_pipeline` is a English model originally trained by GebeyaTalent. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mpnetwithoutchunking_pipeline_en_5.5.0_3.0_1725816591326.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mpnetwithoutchunking_pipeline_en_5.5.0_3.0_1725816591326.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("mpnetwithoutchunking_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("mpnetwithoutchunking_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mpnetwithoutchunking_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.7 MB| + +## References + +https://huggingface.co/GebeyaTalent/mpnetwithoutchunking + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-multi_qa_mpnet_base_cos_v1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-multi_qa_mpnet_base_cos_v1_pipeline_en.md new file mode 100644 index 00000000000000..c80f83d9e9eceb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-multi_qa_mpnet_base_cos_v1_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English multi_qa_mpnet_base_cos_v1_pipeline pipeline MPNetEmbeddings from syndi-models +author: John Snow Labs +name: multi_qa_mpnet_base_cos_v1_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`multi_qa_mpnet_base_cos_v1_pipeline` is a English model originally trained by syndi-models. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/multi_qa_mpnet_base_cos_v1_pipeline_en_5.5.0_3.0_1725816878429.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/multi_qa_mpnet_base_cos_v1_pipeline_en_5.5.0_3.0_1725816878429.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("multi_qa_mpnet_base_cos_v1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("multi_qa_mpnet_base_cos_v1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
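+
+For quick experiments without building a DataFrame first, `PretrainedPipeline` also exposes `annotate`, which runs the pipeline over plain strings. A short sketch; the keys of the returned dictionary depend on the column names inside the pipeline, so inspect them rather than assuming a particular name:
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("multi_qa_mpnet_base_cos_v1_pipeline", lang="en")
+
+# annotate() accepts a single string (or a list of strings)
+result = pipeline.annotate("How do sentence embeddings work?")
+print(result.keys())  # shows which annotation columns the pipeline produces
+```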
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|multi_qa_mpnet_base_cos_v1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/syndi-models/multi-qa-mpnet-base-cos-v1 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-multi_qa_mpnet_base_dot_v1_covidqa_search_65_25_v2_2epoch_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-multi_qa_mpnet_base_dot_v1_covidqa_search_65_25_v2_2epoch_pipeline_en.md new file mode 100644 index 00000000000000..de004bd2dc8424 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-multi_qa_mpnet_base_dot_v1_covidqa_search_65_25_v2_2epoch_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English multi_qa_mpnet_base_dot_v1_covidqa_search_65_25_v2_2epoch_pipeline pipeline MPNetEmbeddings from checkiejan +author: John Snow Labs +name: multi_qa_mpnet_base_dot_v1_covidqa_search_65_25_v2_2epoch_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`multi_qa_mpnet_base_dot_v1_covidqa_search_65_25_v2_2epoch_pipeline` is a English model originally trained by checkiejan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/multi_qa_mpnet_base_dot_v1_covidqa_search_65_25_v2_2epoch_pipeline_en_5.5.0_3.0_1725817398958.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/multi_qa_mpnet_base_dot_v1_covidqa_search_65_25_v2_2epoch_pipeline_en_5.5.0_3.0_1725817398958.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("multi_qa_mpnet_base_dot_v1_covidqa_search_65_25_v2_2epoch_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("multi_qa_mpnet_base_dot_v1_covidqa_search_65_25_v2_2epoch_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|multi_qa_mpnet_base_dot_v1_covidqa_search_65_25_v2_2epoch_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|407.2 MB| + +## References + +https://huggingface.co/checkiejan/multi-qa-mpnet-base-dot-v1-covidqa-search-65-25-v2-2epoch + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-multi_qa_mpnet_base_dot_v1_deneme_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-multi_qa_mpnet_base_dot_v1_deneme_pipeline_en.md new file mode 100644 index 00000000000000..03385573d39e0b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-multi_qa_mpnet_base_dot_v1_deneme_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English multi_qa_mpnet_base_dot_v1_deneme_pipeline pipeline MPNetEmbeddings from mustozsarac +author: John Snow Labs +name: multi_qa_mpnet_base_dot_v1_deneme_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`multi_qa_mpnet_base_dot_v1_deneme_pipeline` is a English model originally trained by mustozsarac. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/multi_qa_mpnet_base_dot_v1_deneme_pipeline_en_5.5.0_3.0_1725816858583.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/multi_qa_mpnet_base_dot_v1_deneme_pipeline_en_5.5.0_3.0_1725816858583.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("multi_qa_mpnet_base_dot_v1_deneme_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("multi_qa_mpnet_base_dot_v1_deneme_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|multi_qa_mpnet_base_dot_v1_deneme_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/mustozsarac/multi-qa-mpnet-base-dot-v1-deneme + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-ner_cw_pipeline_underfitt_en.md b/docs/_posts/ahmedlone127/2024-09-08-ner_cw_pipeline_underfitt_en.md new file mode 100644 index 00000000000000..a13c7d9b4041d1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-ner_cw_pipeline_underfitt_en.md @@ -0,0 +1,66 @@ +--- +layout: model +title: English ner_cw_pipeline_underfitt pipeline DistilBertForTokenClassification from ArshiaKarimian +author: John Snow Labs +name: ner_cw_pipeline_underfitt +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ner_cw_pipeline_underfitt` is a English model originally trained by ArshiaKarimian. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ner_cw_pipeline_underfitt_en_5.5.0_3.0_1725828063920.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ner_cw_pipeline_underfitt_en_5.5.0_3.0_1725828063920.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("ner_cw_pipeline_underfitt", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("ner_cw_pipeline_underfitt", lang = "en") +val annotations = pipeline.transform(df) + +``` +
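+
+Token classification pipelines emit one tag per token. A minimal sketch of pairing tokens with their predicted tags after the transform above, assuming an active SparkSession `spark`; the `token` and `ner` column names follow the model card below, and the example sentence is made up for illustration:
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+df = spark.createDataFrame([["John Snow Labs is based in Delaware"]]).toDF("text")
+annotations = PretrainedPipeline("ner_cw_pipeline_underfitt", lang="en").transform(df)
+
+# Show the tokens and their predicted tags side by side
+annotations.selectExpr("token.result as tokens", "ner.result as tags") \
+    .show(truncate=False)
+```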
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ner_cw_pipeline_underfitt| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/ArshiaKarimian/NER_CW_pipeline_underfitt \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-opus_base_lsp_aon_wce_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-opus_base_lsp_aon_wce_pipeline_en.md new file mode 100644 index 00000000000000..f35bd227a4345e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-opus_base_lsp_aon_wce_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_base_lsp_aon_wce_pipeline pipeline MarianTransformer from ethansimrm +author: John Snow Labs +name: opus_base_lsp_aon_wce_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_base_lsp_aon_wce_pipeline` is a English model originally trained by ethansimrm. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_base_lsp_aon_wce_pipeline_en_5.5.0_3.0_1725765532309.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_base_lsp_aon_wce_pipeline_en_5.5.0_3.0_1725765532309.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_base_lsp_aon_wce_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_base_lsp_aon_wce_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_base_lsp_aon_wce_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|509.0 MB| + +## References + +https://huggingface.co/ethansimrm/opus_base_lsp_AoN_wce + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-opus_big_lsp_simple_wce_en.md b/docs/_posts/ahmedlone127/2024-09-08-opus_big_lsp_simple_wce_en.md new file mode 100644 index 00000000000000..98920a61583286 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-opus_big_lsp_simple_wce_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_big_lsp_simple_wce MarianTransformer from ethansimrm +author: John Snow Labs +name: opus_big_lsp_simple_wce +date: 2024-09-08 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_big_lsp_simple_wce` is a English model originally trained by ethansimrm. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_big_lsp_simple_wce_en_5.5.0_3.0_1725831593440.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_big_lsp_simple_wce_en_5.5.0_3.0_1725831593440.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("opus_big_lsp_simple_wce","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("opus_big_lsp_simple_wce","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
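+
+Once the pipeline above has run, each detected sentence yields one entry in the `translation` column. A minimal sketch of flattening those annotations into plain translated strings, continuing from `pipelineDF` in the Python snippet:
+
+```python
+# One translated string per detected sentence
+pipelineDF.selectExpr("explode(translation.result) as translated_text") \
+    .show(truncate=False)
+```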
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_big_lsp_simple_wce| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/ethansimrm/opus_big_lsp_simple_wce \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_arabic_english_finetuned_arabic_tonga_tonga_islands_english_ani_baghdasaryan_en.md b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_arabic_english_finetuned_arabic_tonga_tonga_islands_english_ani_baghdasaryan_en.md new file mode 100644 index 00000000000000..8f2fc7cd78933b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_arabic_english_finetuned_arabic_tonga_tonga_islands_english_ani_baghdasaryan_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_arabic_english_finetuned_arabic_tonga_tonga_islands_english_ani_baghdasaryan MarianTransformer from ani-baghdasaryan +author: John Snow Labs +name: opus_maltese_arabic_english_finetuned_arabic_tonga_tonga_islands_english_ani_baghdasaryan +date: 2024-09-08 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_arabic_english_finetuned_arabic_tonga_tonga_islands_english_ani_baghdasaryan` is a English model originally trained by ani-baghdasaryan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_arabic_english_finetuned_arabic_tonga_tonga_islands_english_ani_baghdasaryan_en_5.5.0_3.0_1725825008476.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_arabic_english_finetuned_arabic_tonga_tonga_islands_english_ani_baghdasaryan_en_5.5.0_3.0_1725825008476.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("opus_maltese_arabic_english_finetuned_arabic_tonga_tonga_islands_english_ani_baghdasaryan","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("opus_maltese_arabic_english_finetuned_arabic_tonga_tonga_islands_english_ani_baghdasaryan","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_arabic_english_finetuned_arabic_tonga_tonga_islands_english_ani_baghdasaryan| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|527.9 MB| + +## References + +https://huggingface.co/ani-baghdasaryan/opus-mt-ar-en-finetuned-ar-to-en \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instances_un_multi_leaningrate2e_05_batchsize8_11_action_1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instances_un_multi_leaningrate2e_05_batchsize8_11_action_1_pipeline_en.md new file mode 100644 index 00000000000000..d09048445b6676 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instances_un_multi_leaningrate2e_05_batchsize8_11_action_1_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instances_un_multi_leaningrate2e_05_batchsize8_11_action_1_pipeline pipeline MarianTransformer from meghazisofiane +author: John Snow Labs +name: opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instances_un_multi_leaningrate2e_05_batchsize8_11_action_1_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instances_un_multi_leaningrate2e_05_batchsize8_11_action_1_pipeline` is a English model originally trained by meghazisofiane. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instances_un_multi_leaningrate2e_05_batchsize8_11_action_1_pipeline_en_5.5.0_3.0_1725824504191.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instances_un_multi_leaningrate2e_05_batchsize8_11_action_1_pipeline_en_5.5.0_3.0_1725824504191.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instances_un_multi_leaningrate2e_05_batchsize8_11_action_1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instances_un_multi_leaningrate2e_05_batchsize8_11_action_1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instances_un_multi_leaningrate2e_05_batchsize8_11_action_1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|528.7 MB| + +## References + +https://huggingface.co/meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-1000instances-un_multi-leaningRate2e-05-batchSize8-11-action-1 + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instancesopus_leaningrate2e_05_batchsize8_11epoch_3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instancesopus_leaningrate2e_05_batchsize8_11epoch_3_pipeline_en.md new file mode 100644 index 00000000000000..873d779a31c5f3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instancesopus_leaningrate2e_05_batchsize8_11epoch_3_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instancesopus_leaningrate2e_05_batchsize8_11epoch_3_pipeline pipeline MarianTransformer from meghazisofiane +author: John Snow Labs +name: opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instancesopus_leaningrate2e_05_batchsize8_11epoch_3_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instancesopus_leaningrate2e_05_batchsize8_11epoch_3_pipeline` is a English model originally trained by meghazisofiane. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instancesopus_leaningrate2e_05_batchsize8_11epoch_3_pipeline_en_5.5.0_3.0_1725832328393.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instancesopus_leaningrate2e_05_batchsize8_11epoch_3_pipeline_en_5.5.0_3.0_1725832328393.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instancesopus_leaningrate2e_05_batchsize8_11epoch_3_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instancesopus_leaningrate2e_05_batchsize8_11epoch_3_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_arabic_evaluated_english_tonga_tonga_islands_arabic_1000instancesopus_leaningrate2e_05_batchsize8_11epoch_3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|528.7 MB| + +## References + +https://huggingface.co/meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-1000instancesopus-leaningRate2e-05-batchSize8-11epoch-3 + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_chinese_finetuned_eng_tonga_tonga_islands_chn_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_chinese_finetuned_eng_tonga_tonga_islands_chn_pipeline_en.md new file mode 100644 index 00000000000000..e580b6eb86a873 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_chinese_finetuned_eng_tonga_tonga_islands_chn_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_english_chinese_finetuned_eng_tonga_tonga_islands_chn_pipeline pipeline MarianTransformer from ketong3906 +author: John Snow Labs +name: opus_maltese_english_chinese_finetuned_eng_tonga_tonga_islands_chn_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_chinese_finetuned_eng_tonga_tonga_islands_chn_pipeline` is a English model originally trained by ketong3906. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_chinese_finetuned_eng_tonga_tonga_islands_chn_pipeline_en_5.5.0_3.0_1725824316374.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_chinese_finetuned_eng_tonga_tonga_islands_chn_pipeline_en_5.5.0_3.0_1725824316374.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_maltese_english_chinese_finetuned_eng_tonga_tonga_islands_chn_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_maltese_english_chinese_finetuned_eng_tonga_tonga_islands_chn_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_chinese_finetuned_eng_tonga_tonga_islands_chn_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|541.2 MB| + +## References + +https://huggingface.co/ketong3906/opus-mt-en-zh-finetuned-eng-to-chn + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_ganda_finetuned_english_tonga_tonga_islands_ganda_finetuned_english_tonga_tonga_islands_ganda_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_ganda_finetuned_english_tonga_tonga_islands_ganda_finetuned_english_tonga_tonga_islands_ganda_pipeline_en.md new file mode 100644 index 00000000000000..76520af9bb91a6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_ganda_finetuned_english_tonga_tonga_islands_ganda_finetuned_english_tonga_tonga_islands_ganda_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_english_ganda_finetuned_english_tonga_tonga_islands_ganda_finetuned_english_tonga_tonga_islands_ganda_pipeline pipeline MarianTransformer from Tobius +author: John Snow Labs +name: opus_maltese_english_ganda_finetuned_english_tonga_tonga_islands_ganda_finetuned_english_tonga_tonga_islands_ganda_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_ganda_finetuned_english_tonga_tonga_islands_ganda_finetuned_english_tonga_tonga_islands_ganda_pipeline` is a English model originally trained by Tobius. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_ganda_finetuned_english_tonga_tonga_islands_ganda_finetuned_english_tonga_tonga_islands_ganda_pipeline_en_5.5.0_3.0_1725765723705.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_ganda_finetuned_english_tonga_tonga_islands_ganda_finetuned_english_tonga_tonga_islands_ganda_pipeline_en_5.5.0_3.0_1725765723705.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_maltese_english_ganda_finetuned_english_tonga_tonga_islands_ganda_finetuned_english_tonga_tonga_islands_ganda_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_maltese_english_ganda_finetuned_english_tonga_tonga_islands_ganda_finetuned_english_tonga_tonga_islands_ganda_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_ganda_finetuned_english_tonga_tonga_islands_ganda_finetuned_english_tonga_tonga_islands_ganda_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|515.6 MB| + +## References + +https://huggingface.co/Tobius/opus-mt-en-lg-finetuned-en-to-lg-finetuned-en-to-lg + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_clean_marianmt_en.md b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_clean_marianmt_en.md new file mode 100644 index 00000000000000..0f8b42f775f554 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_clean_marianmt_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_clean_marianmt MarianTransformer from himanshubeniwal +author: John Snow Labs +name: opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_clean_marianmt +date: 2024-09-08 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_clean_marianmt` is a English model originally trained by himanshubeniwal. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_clean_marianmt_en_5.5.0_3.0_1725825039902.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_clean_marianmt_en_5.5.0_3.0_1725825039902.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_clean_marianmt","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_clean_marianmt","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_clean_marianmt| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.6 MB| + +## References + +https://huggingface.co/himanshubeniwal/opus-mt-en-ro-finetuned-en-to-ro-clean-MarianMT \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_spanish_finetuned_english_tonga_tonga_islands_spanish_tamil_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_spanish_finetuned_english_tonga_tonga_islands_spanish_tamil_pipeline_en.md new file mode 100644 index 00000000000000..3554c1d9cd3ac6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_spanish_finetuned_english_tonga_tonga_islands_spanish_tamil_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_english_spanish_finetuned_english_tonga_tonga_islands_spanish_tamil_pipeline pipeline MarianTransformer from edu-shok +author: John Snow Labs +name: opus_maltese_english_spanish_finetuned_english_tonga_tonga_islands_spanish_tamil_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_spanish_finetuned_english_tonga_tonga_islands_spanish_tamil_pipeline` is a English model originally trained by edu-shok. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_spanish_finetuned_english_tonga_tonga_islands_spanish_tamil_pipeline_en_5.5.0_3.0_1725796056024.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_spanish_finetuned_english_tonga_tonga_islands_spanish_tamil_pipeline_en_5.5.0_3.0_1725796056024.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_maltese_english_spanish_finetuned_english_tonga_tonga_islands_spanish_tamil_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_maltese_english_spanish_finetuned_english_tonga_tonga_islands_spanish_tamil_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_spanish_finetuned_english_tonga_tonga_islands_spanish_tamil_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|540.6 MB| + +## References + +https://huggingface.co/edu-shok/opus-mt-en-es-finetuned-en-to-es-TA + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_vietnamese_own_finetuned_english_tonga_tonga_islands_vietnamese_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_vietnamese_own_finetuned_english_tonga_tonga_islands_vietnamese_pipeline_en.md new file mode 100644 index 00000000000000..308f54f505e654 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_english_vietnamese_own_finetuned_english_tonga_tonga_islands_vietnamese_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_english_vietnamese_own_finetuned_english_tonga_tonga_islands_vietnamese_pipeline pipeline MarianTransformer from ncduy +author: John Snow Labs +name: opus_maltese_english_vietnamese_own_finetuned_english_tonga_tonga_islands_vietnamese_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_vietnamese_own_finetuned_english_tonga_tonga_islands_vietnamese_pipeline` is a English model originally trained by ncduy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_vietnamese_own_finetuned_english_tonga_tonga_islands_vietnamese_pipeline_en_5.5.0_3.0_1725765533105.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_vietnamese_own_finetuned_english_tonga_tonga_islands_vietnamese_pipeline_en_5.5.0_3.0_1725765533105.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_maltese_english_vietnamese_own_finetuned_english_tonga_tonga_islands_vietnamese_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_maltese_english_vietnamese_own_finetuned_english_tonga_tonga_islands_vietnamese_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_vietnamese_own_finetuned_english_tonga_tonga_islands_vietnamese_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|477.2 MB| + +## References + +https://huggingface.co/ncduy/opus-mt-en-vi-own-finetuned-en-to-vi + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_korean_english_finetuned_korean_tonga_tonga_islands_english_2780616_en.md b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_korean_english_finetuned_korean_tonga_tonga_islands_english_2780616_en.md new file mode 100644 index 00000000000000..394cd1ef7b76be --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_korean_english_finetuned_korean_tonga_tonga_islands_english_2780616_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_korean_english_finetuned_korean_tonga_tonga_islands_english_2780616 MarianTransformer from alphahg +author: John Snow Labs +name: opus_maltese_korean_english_finetuned_korean_tonga_tonga_islands_english_2780616 +date: 2024-09-08 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_korean_english_finetuned_korean_tonga_tonga_islands_english_2780616` is a English model originally trained by alphahg. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_korean_english_finetuned_korean_tonga_tonga_islands_english_2780616_en_5.5.0_3.0_1725795659426.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_korean_english_finetuned_korean_tonga_tonga_islands_english_2780616_en_5.5.0_3.0_1725795659426.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("opus_maltese_korean_english_finetuned_korean_tonga_tonga_islands_english_2780616","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("opus_maltese_korean_english_finetuned_korean_tonga_tonga_islands_english_2780616","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_korean_english_finetuned_korean_tonga_tonga_islands_english_2780616| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|540.7 MB| + +## References + +https://huggingface.co/alphahg/opus-mt-ko-en-finetuned-ko-to-en-2780616 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_russian_english_finetuned_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_russian_english_finetuned_pipeline_en.md new file mode 100644 index 00000000000000..e3416e5dea45e3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-opus_maltese_russian_english_finetuned_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_russian_english_finetuned_pipeline pipeline MarianTransformer from kazandaev +author: John Snow Labs +name: opus_maltese_russian_english_finetuned_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_russian_english_finetuned_pipeline` is a English model originally trained by kazandaev. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_russian_english_finetuned_pipeline_en_5.5.0_3.0_1725824708030.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_russian_english_finetuned_pipeline_en_5.5.0_3.0_1725824708030.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_maltese_russian_english_finetuned_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_maltese_russian_english_finetuned_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
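+
+If you only need to translate a handful of strings, `PretrainedPipeline.annotate` accepts plain text directly, so you do not have to assemble a DataFrame first. A minimal sketch, assuming the pipeline name and language shown above; the Russian sample sentence is only an illustration.
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("opus_maltese_russian_english_finetuned_pipeline", lang="en")
+
+# annotate() returns a dict keyed by the output columns of the stored pipeline
+result = pipeline.annotate("Я люблю Spark NLP")
+print(result)
+```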
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_russian_english_finetuned_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|527.5 MB| + +## References + +https://huggingface.co/kazandaev/opus-mt-ru-en-finetuned + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-output_en.md b/docs/_posts/ahmedlone127/2024-09-08-output_en.md new file mode 100644 index 00000000000000..225a7d6091b105 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-output_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English output DistilBertEmbeddings from soyisauce +author: John Snow Labs +name: output +date: 2024-09-08 +tags: [distilbert, en, open_source, fill_mask, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`output` is a English model originally trained by soyisauce. + +## Predicted Entities + + + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/output_en_5.5.0_3.0_1725766562290.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/output_en_5.5.0_3.0_1725766562290.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+document_assembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+tokenizer = Tokenizer() \
+    .setInputCols(["document"]) \
+    .setOutputCol("token")
+
+embeddings = DistilBertEmbeddings.pretrained("output","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("embeddings")
+
+pipeline = Pipeline().setStages([document_assembler, tokenizer, embeddings])
+
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+
+pipelineDF = pipelineModel.transform(data)
+```
+```scala
+val document_assembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val embeddings = DistilBertEmbeddings
+  .pretrained("output", "en")
+  .setInputCols(Array("document","token"))
+  .setOutputCol("embeddings")
+
+val pipeline = new Pipeline().setStages(Array(document_assembler, tokenizer, embeddings))
+
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+
+val pipelineDF = pipelineModel.transform(data)
+```
+</div>
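+
+Each row of `pipelineDF` carries one annotation per token, with the vector stored in the `embeddings` field of the annotation. A minimal sketch for inspecting the vectors, assuming the `pipelineDF` produced by the Python snippet above.
+
+```python
+from pyspark.sql import functions as F
+
+# One row per token: the token text and its embedding vector
+pipelineDF.select(F.explode("embeddings").alias("emb")) \
+    .select(F.col("emb.result").alias("token"), F.col("emb.embeddings").alias("vector")) \
+    .show(truncate=80)
+```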
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|output| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.4 MB| + +## References + +References + +References + +https://huggingface.co/soyisauce/output \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-passage_ranker_pistachio_pipeline_xx.md b/docs/_posts/ahmedlone127/2024-09-08-passage_ranker_pistachio_pipeline_xx.md new file mode 100644 index 00000000000000..922723929a2ffb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-passage_ranker_pistachio_pipeline_xx.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Multilingual passage_ranker_pistachio_pipeline pipeline BertForSequenceClassification from sinequa +author: John Snow Labs +name: passage_ranker_pistachio_pipeline +date: 2024-09-08 +tags: [xx, open_source, pipeline, onnx] +task: Text Classification +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`passage_ranker_pistachio_pipeline` is a Multilingual model originally trained by sinequa. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/passage_ranker_pistachio_pipeline_xx_5.5.0_3.0_1725825701422.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/passage_ranker_pistachio_pipeline_xx_5.5.0_3.0_1725825701422.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("passage_ranker_pistachio_pipeline", lang = "xx") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("passage_ranker_pistachio_pipeline", lang = "xx") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|passage_ranker_pistachio_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|xx| +|Size:|627.7 MB| + +## References + +https://huggingface.co/sinequa/passage-ranker.pistachio + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-passage_ranker_v1_l_multilingual_pipeline_xx.md b/docs/_posts/ahmedlone127/2024-09-08-passage_ranker_v1_l_multilingual_pipeline_xx.md new file mode 100644 index 00000000000000..38d820bb1ad271 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-passage_ranker_v1_l_multilingual_pipeline_xx.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Multilingual passage_ranker_v1_l_multilingual_pipeline pipeline BertForSequenceClassification from sinequa +author: John Snow Labs +name: passage_ranker_v1_l_multilingual_pipeline +date: 2024-09-08 +tags: [xx, open_source, pipeline, onnx] +task: Text Classification +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`passage_ranker_v1_l_multilingual_pipeline` is a Multilingual model originally trained by sinequa. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/passage_ranker_v1_l_multilingual_pipeline_xx_5.5.0_3.0_1725801624181.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/passage_ranker_v1_l_multilingual_pipeline_xx_5.5.0_3.0_1725801624181.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("passage_ranker_v1_l_multilingual_pipeline", lang = "xx") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("passage_ranker_v1_l_multilingual_pipeline", lang = "xx") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|passage_ranker_v1_l_multilingual_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|xx| +|Size:|466.7 MB| + +## References + +https://huggingface.co/sinequa/passage-ranker-v1-L-multilingual + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-pebblo_classifier_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-pebblo_classifier_pipeline_en.md new file mode 100644 index 00000000000000..307999c62380a1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-pebblo_classifier_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English pebblo_classifier_pipeline pipeline DistilBertForSequenceClassification from daxa-ai +author: John Snow Labs +name: pebblo_classifier_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`pebblo_classifier_pipeline` is a English model originally trained by daxa-ai. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/pebblo_classifier_pipeline_en_5.5.0_3.0_1725764476023.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/pebblo_classifier_pipeline_en_5.5.0_3.0_1725764476023.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("pebblo_classifier_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("pebblo_classifier_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
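+
+The `df` passed to `transform` is expected to carry a `text` column for the pipeline's DocumentAssembler to read. A minimal end-to-end sketch under that assumption; the sample sentence and the `class` output column name are illustrative assumptions based on the standard Spark NLP classification layout.
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("pebblo_classifier_pipeline", lang="en")
+
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+annotations = pipeline.transform(df)
+
+# The predicted label ends up in the classifier stage's output column
+annotations.select("class.result").show(truncate=False)
+```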
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|pebblo_classifier_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/daxa-ai/pebblo-classifier + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-psais_all_mpnet_base_v2_20shot_en.md b/docs/_posts/ahmedlone127/2024-09-08-psais_all_mpnet_base_v2_20shot_en.md new file mode 100644 index 00000000000000..e4013671617fdc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-psais_all_mpnet_base_v2_20shot_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English psais_all_mpnet_base_v2_20shot MPNetEmbeddings from hroth01 +author: John Snow Labs +name: psais_all_mpnet_base_v2_20shot +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`psais_all_mpnet_base_v2_20shot` is a English model originally trained by hroth01. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/psais_all_mpnet_base_v2_20shot_en_5.5.0_3.0_1725817400363.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/psais_all_mpnet_base_v2_20shot_en_5.5.0_3.0_1725817400363.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("psais_all_mpnet_base_v2_20shot","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("psais_all_mpnet_base_v2_20shot","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
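+
+Sentence embeddings like these are typically compared with cosine similarity. A minimal sketch, reusing the fitted `pipelineModel` from the Python snippet above; the two example sentences and the use of NumPy on the driver are assumptions for illustration only.
+
+```python
+import numpy as np
+
+pairs = spark.createDataFrame([["I love spark-nlp"], ["Spark NLP is great"]]).toDF("text")
+rows = pipelineModel.transform(pairs).select("embeddings.embeddings").collect()
+
+# MPNetEmbeddings emits one sentence-level vector per document
+v1 = np.array(rows[0][0][0])
+v2 = np.array(rows[1][0][0])
+
+cosine = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
+print(f"cosine similarity: {cosine:.3f}")
+```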
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|psais_all_mpnet_base_v2_20shot| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/hroth01/psais-all-mpnet-base-v2-20shot \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-psais_multi_qa_mpnet_base_dot_v1_1shot_en.md b/docs/_posts/ahmedlone127/2024-09-08-psais_multi_qa_mpnet_base_dot_v1_1shot_en.md new file mode 100644 index 00000000000000..2ebd5454e9bc20 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-psais_multi_qa_mpnet_base_dot_v1_1shot_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English psais_multi_qa_mpnet_base_dot_v1_1shot MPNetEmbeddings from hroth +author: John Snow Labs +name: psais_multi_qa_mpnet_base_dot_v1_1shot +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`psais_multi_qa_mpnet_base_dot_v1_1shot` is a English model originally trained by hroth. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/psais_multi_qa_mpnet_base_dot_v1_1shot_en_5.5.0_3.0_1725817501081.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/psais_multi_qa_mpnet_base_dot_v1_1shot_en_5.5.0_3.0_1725817501081.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("psais_multi_qa_mpnet_base_dot_v1_1shot","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("psais_multi_qa_mpnet_base_dot_v1_1shot","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|psais_multi_qa_mpnet_base_dot_v1_1shot| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.0 MB| + +## References + +https://huggingface.co/hroth/psais-multi-qa-mpnet-base-dot-v1-1shot \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-psais_multi_qa_mpnet_base_dot_v1_1shot_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-psais_multi_qa_mpnet_base_dot_v1_1shot_pipeline_en.md new file mode 100644 index 00000000000000..2ece54b52681cb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-psais_multi_qa_mpnet_base_dot_v1_1shot_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English psais_multi_qa_mpnet_base_dot_v1_1shot_pipeline pipeline MPNetEmbeddings from hroth +author: John Snow Labs +name: psais_multi_qa_mpnet_base_dot_v1_1shot_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`psais_multi_qa_mpnet_base_dot_v1_1shot_pipeline` is a English model originally trained by hroth. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/psais_multi_qa_mpnet_base_dot_v1_1shot_pipeline_en_5.5.0_3.0_1725817522335.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/psais_multi_qa_mpnet_base_dot_v1_1shot_pipeline_en_5.5.0_3.0_1725817522335.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("psais_multi_qa_mpnet_base_dot_v1_1shot_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("psais_multi_qa_mpnet_base_dot_v1_1shot_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|psais_multi_qa_mpnet_base_dot_v1_1shot_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|407.0 MB| + +## References + +https://huggingface.co/hroth/psais-multi-qa-mpnet-base-dot-v1-1shot + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-q2e_333_en.md b/docs/_posts/ahmedlone127/2024-09-08-q2e_333_en.md new file mode 100644 index 00000000000000..90f84c628567f9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-q2e_333_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English q2e_333 MPNetEmbeddings from ingeol +author: John Snow Labs +name: q2e_333 +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`q2e_333` is a English model originally trained by ingeol. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/q2e_333_en_5.5.0_3.0_1725816556218.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/q2e_333_en_5.5.0_3.0_1725816556218.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("q2e_333","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("q2e_333","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|q2e_333| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/ingeol/q2e_333 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-q2e_ep3_1234_en.md b/docs/_posts/ahmedlone127/2024-09-08-q2e_ep3_1234_en.md new file mode 100644 index 00000000000000..e88e0ee1bed93d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-q2e_ep3_1234_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English q2e_ep3_1234 MPNetEmbeddings from ingeol +author: John Snow Labs +name: q2e_ep3_1234 +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`q2e_ep3_1234` is a English model originally trained by ingeol. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/q2e_ep3_1234_en_5.5.0_3.0_1725769209930.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/q2e_ep3_1234_en_5.5.0_3.0_1725769209930.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("q2e_ep3_1234","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("q2e_ep3_1234","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|q2e_ep3_1234| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/ingeol/q2e_ep3_1234 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-qa_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-qa_pipeline_en.md new file mode 100644 index 00000000000000..77ada57ad7c178 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-qa_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English qa_pipeline pipeline DistilBertForQuestionAnswering from Ateeb +author: John Snow Labs +name: qa_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`qa_pipeline` is a English model originally trained by Ateeb. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/qa_pipeline_en_5.5.0_3.0_1725818638209.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/qa_pipeline_en_5.5.0_3.0_1725818638209.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("qa_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("qa_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
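+
+Because this pipeline starts with a MultiDocumentAssembler, the input DataFrame needs both a question and a context. A minimal sketch; the `question`/`context` column names and the `answer` output column are assumptions based on the usual Spark NLP question-answering layout.
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("qa_pipeline", lang="en")
+
+df = spark.createDataFrame(
+    [["What is my name?", "My name is Clara and I live in Berkeley."]]
+).toDF("question", "context")
+
+pipeline.transform(df).select("answer.result").show(truncate=False)
+```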
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|qa_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Ateeb/QA + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-qnli_microsoft_deberta_v3_base_seed_2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-qnli_microsoft_deberta_v3_base_seed_2_pipeline_en.md new file mode 100644 index 00000000000000..a2feba1ad696b8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-qnli_microsoft_deberta_v3_base_seed_2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English qnli_microsoft_deberta_v3_base_seed_2_pipeline pipeline DeBertaForSequenceClassification from utahnlp +author: John Snow Labs +name: qnli_microsoft_deberta_v3_base_seed_2_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`qnli_microsoft_deberta_v3_base_seed_2_pipeline` is a English model originally trained by utahnlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/qnli_microsoft_deberta_v3_base_seed_2_pipeline_en_5.5.0_3.0_1725812056691.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/qnli_microsoft_deberta_v3_base_seed_2_pipeline_en_5.5.0_3.0_1725812056691.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("qnli_microsoft_deberta_v3_base_seed_2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("qnli_microsoft_deberta_v3_base_seed_2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|qnli_microsoft_deberta_v3_base_seed_2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|637.9 MB| + +## References + +https://huggingface.co/utahnlp/qnli_microsoft_deberta-v3-base_seed-2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-query_only_5_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-query_only_5_pipeline_en.md new file mode 100644 index 00000000000000..d20ed62a1a0839 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-query_only_5_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English query_only_5_pipeline pipeline MPNetEmbeddings from ingeol +author: John Snow Labs +name: query_only_5_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`query_only_5_pipeline` is a English model originally trained by ingeol. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/query_only_5_pipeline_en_5.5.0_3.0_1725817397783.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/query_only_5_pipeline_en_5.5.0_3.0_1725817397783.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("query_only_5_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("query_only_5_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|query_only_5_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/ingeol/query_only_5 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-question_answering_model_jethrowang_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-question_answering_model_jethrowang_pipeline_en.md new file mode 100644 index 00000000000000..0dc578ffb432d5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-question_answering_model_jethrowang_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English question_answering_model_jethrowang_pipeline pipeline DistilBertForQuestionAnswering from jethrowang +author: John Snow Labs +name: question_answering_model_jethrowang_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`question_answering_model_jethrowang_pipeline` is a English model originally trained by jethrowang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/question_answering_model_jethrowang_pipeline_en_5.5.0_3.0_1725823141863.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/question_answering_model_jethrowang_pipeline_en_5.5.0_3.0_1725823141863.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("question_answering_model_jethrowang_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("question_answering_model_jethrowang_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|question_answering_model_jethrowang_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/jethrowang/question_answering_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-question_oriya_statement_en.md b/docs/_posts/ahmedlone127/2024-09-08-question_oriya_statement_en.md new file mode 100644 index 00000000000000..7718b744eee478 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-question_oriya_statement_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English question_oriya_statement RoBertaForSequenceClassification from nikolasmoya +author: John Snow Labs +name: question_oriya_statement +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`question_oriya_statement` is a English model originally trained by nikolasmoya. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/question_oriya_statement_en_5.5.0_3.0_1725829924004.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/question_oriya_statement_en_5.5.0_3.0_1725829924004.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("question_oriya_statement","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("question_oriya_statement", "en")
+  .setInputCols(Array("document","token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
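+
+Besides the predicted label, each class annotation carries per-label scores in its metadata. A minimal sketch for pulling both out of the `pipelineDF` produced above; the exact metadata keys depend on the model's label names, so treat them as an assumption.
+
+```python
+from pyspark.sql import functions as F
+
+pipelineDF.select(
+    F.col("text"),
+    F.expr("class.result[0]").alias("label"),
+    F.expr("class.metadata[0]").alias("scores")  # map of label -> probability
+).show(truncate=False)
+```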
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|question_oriya_statement| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|466.6 MB| + +## References + +https://huggingface.co/nikolasmoya/question-or-statement \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-res_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-res_pipeline_en.md new file mode 100644 index 00000000000000..439da309f42b0c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-res_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English res_pipeline pipeline DistilBertForQuestionAnswering from artiert +author: John Snow Labs +name: res_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`res_pipeline` is a English model originally trained by artiert. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/res_pipeline_en_5.5.0_3.0_1725798139438.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/res_pipeline_en_5.5.0_3.0_1725798139438.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("res_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("res_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|res_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|250.1 MB| + +## References + +https://huggingface.co/artiert/res + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-results_forwarder1121_en.md b/docs/_posts/ahmedlone127/2024-09-08-results_forwarder1121_en.md new file mode 100644 index 00000000000000..6c6b1a5136ebe2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-results_forwarder1121_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English results_forwarder1121 DistilBertForSequenceClassification from forwarder1121 +author: John Snow Labs +name: results_forwarder1121 +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`results_forwarder1121` is a English model originally trained by forwarder1121. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/results_forwarder1121_en_5.5.0_3.0_1725808932283.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/results_forwarder1121_en_5.5.0_3.0_1725808932283.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DistilBertForSequenceClassification.pretrained("results_forwarder1121","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = DistilBertForSequenceClassification.pretrained("results_forwarder1121", "en")
+  .setInputCols(Array("document","token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|results_forwarder1121| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/forwarder1121/results \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-rise_ner_distilbert_base_cased_system_b_v2_en.md b/docs/_posts/ahmedlone127/2024-09-08-rise_ner_distilbert_base_cased_system_b_v2_en.md new file mode 100644 index 00000000000000..3f24e0cf29c3c8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-rise_ner_distilbert_base_cased_system_b_v2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English rise_ner_distilbert_base_cased_system_b_v2 DistilBertForTokenClassification from petersamoaa +author: John Snow Labs +name: rise_ner_distilbert_base_cased_system_b_v2 +date: 2024-09-08 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`rise_ner_distilbert_base_cased_system_b_v2` is a English model originally trained by petersamoaa. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/rise_ner_distilbert_base_cased_system_b_v2_en_5.5.0_3.0_1725837477686.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/rise_ner_distilbert_base_cased_system_b_v2_en_5.5.0_3.0_1725837477686.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("rise_ner_distilbert_base_cased_system_b_v2","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("rise_ner_distilbert_base_cased_system_b_v2", "en")
+  .setInputCols(Array("document","token"))
+  .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
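+
+The token classifier emits one IOB tag per token. To group those tags into entity chunks you can append a `NerConverter` stage, as sketched below on top of the stages defined in the Python snippet above; the `ner_chunk` column name is an assumption.
+
+```python
+from sparknlp.annotator import NerConverter
+
+converter = NerConverter() \
+    .setInputCols(["document", "token", "ner"]) \
+    .setOutputCol("ner_chunk")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier, converter])
+pipelineModel = pipeline.fit(data)
+pipelineModel.transform(data).select("ner_chunk.result").show(truncate=False)
+```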
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|rise_ner_distilbert_base_cased_system_b_v2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|243.9 MB| + +## References + +https://huggingface.co/petersamoaa/rise-ner-distilbert-base-cased-system-b-v2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-roberta_base_bne_finetuned_amazon_reviews_multi_jorgemariocalvo_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-roberta_base_bne_finetuned_amazon_reviews_multi_jorgemariocalvo_pipeline_en.md new file mode 100644 index 00000000000000..463db106a6d043 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-roberta_base_bne_finetuned_amazon_reviews_multi_jorgemariocalvo_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_base_bne_finetuned_amazon_reviews_multi_jorgemariocalvo_pipeline pipeline RoBertaForSequenceClassification from jorgemariocalvo +author: John Snow Labs +name: roberta_base_bne_finetuned_amazon_reviews_multi_jorgemariocalvo_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_bne_finetuned_amazon_reviews_multi_jorgemariocalvo_pipeline` is a English model originally trained by jorgemariocalvo. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_bne_finetuned_amazon_reviews_multi_jorgemariocalvo_pipeline_en_5.5.0_3.0_1725830948552.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_bne_finetuned_amazon_reviews_multi_jorgemariocalvo_pipeline_en_5.5.0_3.0_1725830948552.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_bne_finetuned_amazon_reviews_multi_jorgemariocalvo_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_bne_finetuned_amazon_reviews_multi_jorgemariocalvo_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_bne_finetuned_amazon_reviews_multi_jorgemariocalvo_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|446.8 MB| + +## References + +https://huggingface.co/jorgemariocalvo/roberta-base-bne-finetuned-amazon_reviews_multi + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-roberta_base_squad_i8_f32_p70_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-roberta_base_squad_i8_f32_p70_pipeline_en.md new file mode 100644 index 00000000000000..df5830195dcd78 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-roberta_base_squad_i8_f32_p70_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English roberta_base_squad_i8_f32_p70_pipeline pipeline RoBertaForQuestionAnswering from pminha +author: John Snow Labs +name: roberta_base_squad_i8_f32_p70_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_squad_i8_f32_p70_pipeline` is a English model originally trained by pminha. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_squad_i8_f32_p70_pipeline_en_5.5.0_3.0_1725833494394.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_squad_i8_f32_p70_pipeline_en_5.5.0_3.0_1725833494394.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_squad_i8_f32_p70_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_squad_i8_f32_p70_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_squad_i8_f32_p70_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|202.9 MB| + +## References + +https://huggingface.co/pminha/roberta-base-squad-i8-f32-p70 + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-roberta_large_argugpt_en.md b/docs/_posts/ahmedlone127/2024-09-08-roberta_large_argugpt_en.md new file mode 100644 index 00000000000000..60d2970217ce5d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-roberta_large_argugpt_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_large_argugpt RoBertaForSequenceClassification from SJTU-CL +author: John Snow Labs +name: roberta_large_argugpt +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_large_argugpt` is a English model originally trained by SJTU-CL. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_large_argugpt_en_5.5.0_3.0_1725829800472.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_large_argugpt_en_5.5.0_3.0_1725829800472.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_large_argugpt","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_large_argugpt", "en")
+  .setInputCols(Array("document","token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_large_argugpt| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/SJTU-CL/RoBERTa-large-ArguGPT \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-roberta_qa_base_squad_nl.md b/docs/_posts/ahmedlone127/2024-09-08-roberta_qa_base_squad_nl.md new file mode 100644 index 00000000000000..357693cb3af0d7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-roberta_qa_base_squad_nl.md @@ -0,0 +1,92 @@ +--- +layout: model +title: Dutch RobertaForQuestionAnswering Base Cased model (from Nadav) +author: John Snow Labs +name: roberta_qa_base_squad +date: 2024-09-08 +tags: [nl, open_source, roberta, question_answering, onnx] +task: Question Answering +language: nl +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RobertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `roberta-base-squad-nl` is a Dutch model originally trained by `Nadav`. + +## Predicted Entities + + + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_qa_base_squad_nl_5.5.0_3.0_1725833484403.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_qa_base_squad_nl_5.5.0_3.0_1725833484403.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python +Document_Assembler = MultiDocumentAssembler()\ + .setInputCols(["question", "context"])\ + .setOutputCols(["document_question", "document_context"]) + +Question_Answering = RoBertaForQuestionAnswering.pretrained("roberta_qa_base_squad","nl")\ + .setInputCols(["document_question", "document_context"])\ + .setOutputCol("answer")\ + .setCaseSensitive(True) + +pipeline = Pipeline(stages=[Document_Assembler, Question_Answering]) + +data = spark.createDataFrame([["What's my name?","My name is Clara and I live in Berkeley."]]).toDF("question", "context") + +result = pipeline.fit(data).transform(data) +``` +```scala +val Document_Assembler = new MultiDocumentAssembler() + .setInputCols(Array("question", "context")) + .setOutputCols(Array("document_question", "document_context")) + +val Question_Answering = RoBertaForQuestionAnswering.pretrained("roberta_qa_base_squad","nl") + .setInputCols(Array("document_question", "document_context")) + .setOutputCol("answer") + .setCaseSensitive(true) + +val pipeline = new Pipeline().setStages(Array(Document_Assembler, Question_Answering)) + +val data = Seq("What's my name?","My name is Clara and I live in Berkeley.").toDS.toDF("question", "context") + +val result = pipeline.fit(data).transform(data) +``` +
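+
+Once the pipeline has run, the extracted answer spans live in the `answer` output column. A small follow-up to the snippet above that reuses its `pipeline` object; the Dutch question/context pair is only an illustration.
+
+```python
+nl_data = spark.createDataFrame(
+    [["Hoe heet ik?", "Mijn naam is Clara en ik woon in Berkeley."]]
+).toDF("question", "context")
+
+nl_result = pipeline.fit(nl_data).transform(nl_data)
+nl_result.select("answer.result").show(truncate=False)
+```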
+
+{:.model-param}
+## Model Information
+
+{:.table-model}
+|---|---|
+|Model Name:|roberta_qa_base_squad|
+|Compatibility:|Spark NLP 5.5.0+|
+|License:|Open Source|
+|Edition:|Official|
+|Input Labels:|[document_question, document_context]|
+|Output Labels:|[answer]|
+|Language:|nl|
+|Size:|435.7 MB|
+
+## References
+
+- https://huggingface.co/Nadav/roberta-base-squad-nl
\ No newline at end of file
diff --git a/docs/_posts/ahmedlone127/2024-09-08-rotten_tomatoes_microsoft_deberta_v3_base_seed_1_en.md b/docs/_posts/ahmedlone127/2024-09-08-rotten_tomatoes_microsoft_deberta_v3_base_seed_1_en.md
new file mode 100644
index 00000000000000..e823cce6d2cfda
--- /dev/null
+++ b/docs/_posts/ahmedlone127/2024-09-08-rotten_tomatoes_microsoft_deberta_v3_base_seed_1_en.md
@@ -0,0 +1,94 @@
+---
+layout: model
+title: English rotten_tomatoes_microsoft_deberta_v3_base_seed_1 DeBertaForSequenceClassification from utahnlp
+author: John Snow Labs
+name: rotten_tomatoes_microsoft_deberta_v3_base_seed_1
+date: 2024-09-08
+tags: [en, open_source, onnx, sequence_classification, deberta]
+task: Text Classification
+language: en
+edition: Spark NLP 5.5.0
+spark_version: 3.0
+supported: true
+engine: onnx
+annotator: DeBertaForSequenceClassification
+article_header:
+  type: cover
+use_language_switcher: "Python-Scala-Java"
+---
+
+## Description
+
+Pretrained DeBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `rotten_tomatoes_microsoft_deberta_v3_base_seed_1` is an English model originally trained by utahnlp.
+
+{:.btn-box}
+
+
+[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/rotten_tomatoes_microsoft_deberta_v3_base_seed_1_en_5.5.0_3.0_1725802954461.zip){:.button.button-orange.button-orange-trans.arr.button-icon}
+[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/rotten_tomatoes_microsoft_deberta_v3_base_seed_1_en_5.5.0_3.0_1725802954461.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}
+
+## How to use
+
+
+
+
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DeBertaForSequenceClassification.pretrained("rotten_tomatoes_microsoft_deberta_v3_base_seed_1","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DeBertaForSequenceClassification.pretrained("rotten_tomatoes_microsoft_deberta_v3_base_seed_1", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|rotten_tomatoes_microsoft_deberta_v3_base_seed_1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|577.8 MB| + +## References + +https://huggingface.co/utahnlp/rotten_tomatoes_microsoft_deberta-v3-base_seed-1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-russian_sentiment_classification_model_en.md b/docs/_posts/ahmedlone127/2024-09-08-russian_sentiment_classification_model_en.md new file mode 100644 index 00000000000000..a1d9817330fb62 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-russian_sentiment_classification_model_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English russian_sentiment_classification_model BertForSequenceClassification from annavtkn +author: John Snow Labs +name: russian_sentiment_classification_model +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`russian_sentiment_classification_model` is a English model originally trained by annavtkn. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/russian_sentiment_classification_model_en_5.5.0_3.0_1725825981878.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/russian_sentiment_classification_model_en_5.5.0_3.0_1725825981878.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = BertForSequenceClassification.pretrained("russian_sentiment_classification_model","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = BertForSequenceClassification.pretrained("russian_sentiment_classification_model", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|russian_sentiment_classification_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|666.5 MB| + +## References + +https://huggingface.co/annavtkn/ru_sentiment_classification_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-sanskrit_saskta_qna_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-sanskrit_saskta_qna_pipeline_en.md new file mode 100644 index 00000000000000..549b5251c0f463 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-sanskrit_saskta_qna_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English sanskrit_saskta_qna_pipeline pipeline DistilBertForQuestionAnswering from Sachinkelenjaguri +author: John Snow Labs +name: sanskrit_saskta_qna_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sanskrit_saskta_qna_pipeline` is a English model originally trained by Sachinkelenjaguri. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sanskrit_saskta_qna_pipeline_en_5.5.0_3.0_1725798133176.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sanskrit_saskta_qna_pipeline_en_5.5.0_3.0_1725798133176.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("sanskrit_saskta_qna_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("sanskrit_saskta_qna_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
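+The `df` passed to `transform` must already hold the columns the bundled MultiDocumentAssembler reads. A minimal sketch of preparing such input (the `question` and `context` column names are an assumption, mirroring the question answering examples elsewhere in these docs):
+
+```python
+# Hypothetical question/context pair fed to the pretrained QA pipeline
+df = spark.createDataFrame(
+    [["Who trained this model?", "The model was trained by Sachinkelenjaguri."]],
+    ["question", "context"],
+)
+annotations = pipeline.transform(df)
+```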
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sanskrit_saskta_qna_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|248.0 MB| + +## References + +https://huggingface.co/Sachinkelenjaguri/sa_Qna + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-scenario_tcr_data_cardiffnlp_tweet_sentiment_multilingual_all_d_xx.md b/docs/_posts/ahmedlone127/2024-09-08-scenario_tcr_data_cardiffnlp_tweet_sentiment_multilingual_all_d_xx.md new file mode 100644 index 00000000000000..f8315b4cb85eea --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-scenario_tcr_data_cardiffnlp_tweet_sentiment_multilingual_all_d_xx.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Multilingual scenario_tcr_data_cardiffnlp_tweet_sentiment_multilingual_all_d XlmRoBertaForSequenceClassification from haryoaw +author: John Snow Labs +name: scenario_tcr_data_cardiffnlp_tweet_sentiment_multilingual_all_d +date: 2024-09-08 +tags: [xx, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`scenario_tcr_data_cardiffnlp_tweet_sentiment_multilingual_all_d` is a Multilingual model originally trained by haryoaw. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/scenario_tcr_data_cardiffnlp_tweet_sentiment_multilingual_all_d_xx_5.5.0_3.0_1725800170758.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/scenario_tcr_data_cardiffnlp_tweet_sentiment_multilingual_all_d_xx_5.5.0_3.0_1725800170758.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("scenario_tcr_data_cardiffnlp_tweet_sentiment_multilingual_all_d","xx") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("scenario_tcr_data_cardiffnlp_tweet_sentiment_multilingual_all_d", "xx")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|scenario_tcr_data_cardiffnlp_tweet_sentiment_multilingual_all_d| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|xx| +|Size:|836.5 MB| + +## References + +https://huggingface.co/haryoaw/scenario-TCR_data-cardiffnlp_tweet_sentiment_multilingual_all_d \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-sdgbert_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-sdgbert_pipeline_en.md new file mode 100644 index 00000000000000..da3f08acf58792 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-sdgbert_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English sdgbert_pipeline pipeline BertForSequenceClassification from sadickam +author: John Snow Labs +name: sdgbert_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sdgbert_pipeline` is a English model originally trained by sadickam. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sdgbert_pipeline_en_5.5.0_3.0_1725761510401.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sdgbert_pipeline_en_5.5.0_3.0_1725761510401.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("sdgbert_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("sdgbert_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
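+Because the bundled DocumentAssembler works on raw text, `df` only needs one string column. A minimal sketch (the `text` column name is an assumption, matching the DocumentAssembler defaults used in the other examples):
+
+```python
+# Hypothetical single-column input for the pretrained classification pipeline
+df = spark.createDataFrame([["I love spark-nlp"]], ["text"])
+annotations = pipeline.transform(df)
+```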
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sdgbert_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|409.5 MB| + +## References + +https://huggingface.co/sadickam/sdgBERT + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-sent_bert_base_arabic_camelbert_msa_sixteenth_ar.md b/docs/_posts/ahmedlone127/2024-09-08-sent_bert_base_arabic_camelbert_msa_sixteenth_ar.md new file mode 100644 index 00000000000000..c714a1594e2699 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-sent_bert_base_arabic_camelbert_msa_sixteenth_ar.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Arabic sent_bert_base_arabic_camelbert_msa_sixteenth BertSentenceEmbeddings from CAMeL-Lab +author: John Snow Labs +name: sent_bert_base_arabic_camelbert_msa_sixteenth +date: 2024-09-08 +tags: [ar, open_source, onnx, sentence_embeddings, bert] +task: Embeddings +language: ar +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertSentenceEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertSentenceEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sent_bert_base_arabic_camelbert_msa_sixteenth` is a Arabic model originally trained by CAMeL-Lab. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sent_bert_base_arabic_camelbert_msa_sixteenth_ar_5.5.0_3.0_1725790839713.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sent_bert_base_arabic_camelbert_msa_sixteenth_ar_5.5.0_3.0_1725790839713.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \ + .setInputCols(["document"]) \ + .setOutputCol("sentence") + +embeddings = BertSentenceEmbeddings.pretrained("sent_bert_base_arabic_camelbert_msa_sixteenth","ar") \ + .setInputCols(["sentence"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, sentenceDL, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") + .setInputCols(Array("document")) + .setOutputCol("sentence") + +val embeddings = BertSentenceEmbeddings.pretrained("sent_bert_base_arabic_camelbert_msa_sixteenth","ar") + .setInputCols(Array("sentence")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
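+After `transform`, every detected sentence carries its vector in the `embeddings` annotation column. A short sketch for pulling the raw vectors out (assuming the standard Spark NLP annotation schema):
+
+```python
+from pyspark.sql import functions as F
+
+# One row per sentence: the sentence text and its embedding vector
+pipelineDF.select(F.explode("embeddings").alias("ann")) \
+    .select("ann.result", "ann.embeddings") \
+    .show(truncate=False)
+```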
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sent_bert_base_arabic_camelbert_msa_sixteenth| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentence]| +|Output Labels:|[embeddings]| +|Language:|ar| +|Size:|406.4 MB| + +## References + +https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-sent_bert_base_uncased_google_bert_en.md b/docs/_posts/ahmedlone127/2024-09-08-sent_bert_base_uncased_google_bert_en.md new file mode 100644 index 00000000000000..502b437ca28a9f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-sent_bert_base_uncased_google_bert_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English sent_bert_base_uncased_google_bert BertSentenceEmbeddings from google-bert +author: John Snow Labs +name: sent_bert_base_uncased_google_bert +date: 2024-09-08 +tags: [en, open_source, onnx, sentence_embeddings, bert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertSentenceEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertSentenceEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sent_bert_base_uncased_google_bert` is a English model originally trained by google-bert. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sent_bert_base_uncased_google_bert_en_5.5.0_3.0_1725790843823.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sent_bert_base_uncased_google_bert_en_5.5.0_3.0_1725790843823.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \ + .setInputCols(["document"]) \ + .setOutputCol("sentence") + +embeddings = BertSentenceEmbeddings.pretrained("sent_bert_base_uncased_google_bert","en") \ + .setInputCols(["sentence"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, sentenceDL, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") + .setInputCols(Array("document")) + .setOutputCol("sentence") + +val embeddings = BertSentenceEmbeddings.pretrained("sent_bert_base_uncased_google_bert","en") + .setInputCols(Array("sentence")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sent_bert_base_uncased_google_bert| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentence]| +|Output Labels:|[embeddings]| +|Language:|en| +|Size:|407.2 MB| + +## References + +https://huggingface.co/google-bert/bert-base-uncased \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-sent_newsbertje_base_en.md b/docs/_posts/ahmedlone127/2024-09-08-sent_newsbertje_base_en.md new file mode 100644 index 00000000000000..7b7d13cff05094 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-sent_newsbertje_base_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English sent_newsbertje_base BertSentenceEmbeddings from LoicDL +author: John Snow Labs +name: sent_newsbertje_base +date: 2024-09-08 +tags: [en, open_source, onnx, sentence_embeddings, bert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertSentenceEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertSentenceEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sent_newsbertje_base` is a English model originally trained by LoicDL. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sent_newsbertje_base_en_5.5.0_3.0_1725791297081.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sent_newsbertje_base_en_5.5.0_3.0_1725791297081.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \ + .setInputCols(["document"]) \ + .setOutputCol("sentence") + +embeddings = BertSentenceEmbeddings.pretrained("sent_newsbertje_base","en") \ + .setInputCols(["sentence"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, sentenceDL, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") + .setInputCols(Array("document")) + .setOutputCol("sentence") + +val embeddings = BertSentenceEmbeddings.pretrained("sent_newsbertje_base","en") + .setInputCols(Array("sentence")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sent_newsbertje_base| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentence]| +|Output Labels:|[embeddings]| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/LoicDL/NewsBERTje-base \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-sentencepiecebpe_nachos_french_morphemes_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-sentencepiecebpe_nachos_french_morphemes_pipeline_en.md new file mode 100644 index 00000000000000..c35f8519c99459 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-sentencepiecebpe_nachos_french_morphemes_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English sentencepiecebpe_nachos_french_morphemes_pipeline pipeline CamemBertEmbeddings from BioMedTok +author: John Snow Labs +name: sentencepiecebpe_nachos_french_morphemes_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sentencepiecebpe_nachos_french_morphemes_pipeline` is a English model originally trained by BioMedTok. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sentencepiecebpe_nachos_french_morphemes_pipeline_en_5.5.0_3.0_1725836147088.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sentencepiecebpe_nachos_french_morphemes_pipeline_en_5.5.0_3.0_1725836147088.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("sentencepiecebpe_nachos_french_morphemes_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("sentencepiecebpe_nachos_french_morphemes_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sentencepiecebpe_nachos_french_morphemes_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|412.6 MB| + +## References + +https://huggingface.co/BioMedTok/SentencePieceBPE-NACHOS-FR-Morphemes + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-sentiment_analysis_ninja_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-sentiment_analysis_ninja_pipeline_en.md new file mode 100644 index 00000000000000..4ec993c5c31234 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-sentiment_analysis_ninja_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English sentiment_analysis_ninja_pipeline pipeline BertForSequenceClassification from ninja +author: John Snow Labs +name: sentiment_analysis_ninja_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sentiment_analysis_ninja_pipeline` is a English model originally trained by ninja. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sentiment_analysis_ninja_pipeline_en_5.5.0_3.0_1725825746663.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sentiment_analysis_ninja_pipeline_en_5.5.0_3.0_1725825746663.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("sentiment_analysis_ninja_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("sentiment_analysis_ninja_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sentiment_analysis_ninja_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/ninja/Sentiment_Analysis + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-setfit_model_feb11_misinformation_on_media_traditional_social_en.md b/docs/_posts/ahmedlone127/2024-09-08-setfit_model_feb11_misinformation_on_media_traditional_social_en.md new file mode 100644 index 00000000000000..3a045092c58deb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-setfit_model_feb11_misinformation_on_media_traditional_social_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English setfit_model_feb11_misinformation_on_media_traditional_social MPNetEmbeddings from mitra-mir +author: John Snow Labs +name: setfit_model_feb11_misinformation_on_media_traditional_social +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`setfit_model_feb11_misinformation_on_media_traditional_social` is a English model originally trained by mitra-mir. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/setfit_model_feb11_misinformation_on_media_traditional_social_en_5.5.0_3.0_1725817520310.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/setfit_model_feb11_misinformation_on_media_traditional_social_en_5.5.0_3.0_1725817520310.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("setfit_model_feb11_misinformation_on_media_traditional_social","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("setfit_model_feb11_misinformation_on_media_traditional_social","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
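+If plain vector columns are easier to consume downstream, an EmbeddingsFinisher stage can be appended to the same pipeline. A minimal sketch of that variation (column names follow the example above; this is an illustrative extension, not part of the original card):
+
+```python
+from sparknlp.base import EmbeddingsFinisher
+
+# Convert Spark NLP annotations into plain Spark ML vectors
+finisher = EmbeddingsFinisher() \
+    .setInputCols(["embeddings"]) \
+    .setOutputCols(["finished_embeddings"]) \
+    .setOutputAsVector(True)
+
+pipeline = Pipeline().setStages([documentAssembler, embeddings, finisher])
+```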
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|setfit_model_feb11_misinformation_on_media_traditional_social| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/mitra-mir/setfit-model-Feb11-Misinformation-on-Media-Traditional-Social \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-setfit_trainer_en.md b/docs/_posts/ahmedlone127/2024-09-08-setfit_trainer_en.md new file mode 100644 index 00000000000000..d513db39bbd701 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-setfit_trainer_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English setfit_trainer MPNetEmbeddings from Linco +author: John Snow Labs +name: setfit_trainer +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`setfit_trainer` is a English model originally trained by Linco. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/setfit_trainer_en_5.5.0_3.0_1725815710484.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/setfit_trainer_en_5.5.0_3.0_1725815710484.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("setfit_trainer","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("setfit_trainer","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|setfit_trainer| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.2 MB| + +## References + +https://huggingface.co/Linco/setfit-trainer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-setfit_trainer_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-setfit_trainer_pipeline_en.md new file mode 100644 index 00000000000000..3d60bdb8b4a2b5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-setfit_trainer_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English setfit_trainer_pipeline pipeline MPNetEmbeddings from Linco +author: John Snow Labs +name: setfit_trainer_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`setfit_trainer_pipeline` is a English model originally trained by Linco. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/setfit_trainer_pipeline_en_5.5.0_3.0_1725815731199.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/setfit_trainer_pipeline_en_5.5.0_3.0_1725815731199.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("setfit_trainer_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("setfit_trainer_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|setfit_trainer_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|407.2 MB| + +## References + +https://huggingface.co/Linco/setfit-trainer + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-sota_4_en.md b/docs/_posts/ahmedlone127/2024-09-08-sota_4_en.md new file mode 100644 index 00000000000000..977a787b8fa922 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-sota_4_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English sota_4 RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: sota_4 +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sota_4` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sota_4_en_5.5.0_3.0_1725778942250.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sota_4_en_5.5.0_3.0_1725778942250.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("sota_4","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("sota_4", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sota_4| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/SOTA_4 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-sourceresearchz_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-sourceresearchz_pipeline_en.md new file mode 100644 index 00000000000000..13df3d883ee46a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-sourceresearchz_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English sourceresearchz_pipeline pipeline DistilBertEmbeddings from srz30 +author: John Snow Labs +name: sourceresearchz_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sourceresearchz_pipeline` is a English model originally trained by srz30. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sourceresearchz_pipeline_en_5.5.0_3.0_1725776473935.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sourceresearchz_pipeline_en_5.5.0_3.0_1725776473935.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("sourceresearchz_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("sourceresearchz_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sourceresearchz_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/srz30/SourceResearchZ + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-southern_sotho_all_mpnet_finetuned_comb_6000_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-southern_sotho_all_mpnet_finetuned_comb_6000_pipeline_en.md new file mode 100644 index 00000000000000..0b1e4551ae5cee --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-southern_sotho_all_mpnet_finetuned_comb_6000_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English southern_sotho_all_mpnet_finetuned_comb_6000_pipeline pipeline MPNetEmbeddings from danfeg +author: John Snow Labs +name: southern_sotho_all_mpnet_finetuned_comb_6000_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`southern_sotho_all_mpnet_finetuned_comb_6000_pipeline` is a English model originally trained by danfeg. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/southern_sotho_all_mpnet_finetuned_comb_6000_pipeline_en_5.5.0_3.0_1725769219660.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/southern_sotho_all_mpnet_finetuned_comb_6000_pipeline_en_5.5.0_3.0_1725769219660.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("southern_sotho_all_mpnet_finetuned_comb_6000_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("southern_sotho_all_mpnet_finetuned_comb_6000_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|southern_sotho_all_mpnet_finetuned_comb_6000_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/danfeg/ST-ALL-MPNET_Finetuned-COMB-6000 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-southern_sotho_english_50_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-southern_sotho_english_50_pipeline_en.md new file mode 100644 index 00000000000000..1860b0fca1785f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-southern_sotho_english_50_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English southern_sotho_english_50_pipeline pipeline MarianTransformer from cw1521 +author: John Snow Labs +name: southern_sotho_english_50_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`southern_sotho_english_50_pipeline` is a English model originally trained by cw1521. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/southern_sotho_english_50_pipeline_en_5.5.0_3.0_1725795539545.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/southern_sotho_english_50_pipeline_en_5.5.0_3.0_1725795539545.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("southern_sotho_english_50_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("southern_sotho_english_50_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
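+For quick experiments the pretrained pipeline can also be called on plain strings via `annotate`, without building a DataFrame first. A minimal sketch (the input string is a placeholder):
+
+```python
+# One-off translation of a single sentence
+result = pipeline.annotate("Your source sentence here")
+print(result)
+```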
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|southern_sotho_english_50_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|509.1 MB| + +## References + +https://huggingface.co/cw1521/st-en-50 + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-spanish_finnish_all_copy_quy_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-spanish_finnish_all_copy_quy_pipeline_en.md new file mode 100644 index 00000000000000..859b80ec4052ab --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-spanish_finnish_all_copy_quy_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English spanish_finnish_all_copy_quy_pipeline pipeline MarianTransformer from nouman-10 +author: John Snow Labs +name: spanish_finnish_all_copy_quy_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`spanish_finnish_all_copy_quy_pipeline` is a English model originally trained by nouman-10. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/spanish_finnish_all_copy_quy_pipeline_en_5.5.0_3.0_1725824980955.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/spanish_finnish_all_copy_quy_pipeline_en_5.5.0_3.0_1725824980955.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("spanish_finnish_all_copy_quy_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("spanish_finnish_all_copy_quy_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|spanish_finnish_all_copy_quy_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|531.3 MB| + +## References + +https://huggingface.co/nouman-10/es_fi_all_copy_quy + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-squad_qa_model_portuguese_en.md b/docs/_posts/ahmedlone127/2024-09-08-squad_qa_model_portuguese_en.md new file mode 100644 index 00000000000000..87b9dd1370332c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-squad_qa_model_portuguese_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English squad_qa_model_portuguese DistilBertForQuestionAnswering from Mrsteveme +author: John Snow Labs +name: squad_qa_model_portuguese +date: 2024-09-08 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`squad_qa_model_portuguese` is a English model originally trained by Mrsteveme. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/squad_qa_model_portuguese_en_5.5.0_3.0_1725798538295.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/squad_qa_model_portuguese_en_5.5.0_3.0_1725798538295.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("squad_qa_model_portuguese","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("squad_qa_model_portuguese", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
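+The predicted span ends up in the `answer` column of the transformed DataFrame. A small sketch for reading it back (assuming the standard Spark NLP annotation schema):
+
+```python
+# Extracted answer text for each question/context pair
+pipelineDF.select("answer.result").show(truncate=False)
+```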
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|squad_qa_model_portuguese| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Mrsteveme/SQuAD_qa_model_PT \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-sst_mobilebert_uncased_en.md b/docs/_posts/ahmedlone127/2024-09-08-sst_mobilebert_uncased_en.md new file mode 100644 index 00000000000000..89937a28d50cf0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-sst_mobilebert_uncased_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English sst_mobilebert_uncased BertForSequenceClassification from cambridgeltl +author: John Snow Labs +name: sst_mobilebert_uncased +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sst_mobilebert_uncased` is a English model originally trained by cambridgeltl. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sst_mobilebert_uncased_en_5.5.0_3.0_1725838747059.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sst_mobilebert_uncased_en_5.5.0_3.0_1725838747059.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = BertForSequenceClassification.pretrained("sst_mobilebert_uncased","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = BertForSequenceClassification.pretrained("sst_mobilebert_uncased", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sst_mobilebert_uncased| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|92.5 MB| + +## References + +https://huggingface.co/cambridgeltl/sst_mobilebert-uncased \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-stego_classifier_checkpoint_epoch_80_2024_07_26_16_03_28_en.md b/docs/_posts/ahmedlone127/2024-09-08-stego_classifier_checkpoint_epoch_80_2024_07_26_16_03_28_en.md new file mode 100644 index 00000000000000..8fd40d962d65b9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-stego_classifier_checkpoint_epoch_80_2024_07_26_16_03_28_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English stego_classifier_checkpoint_epoch_80_2024_07_26_16_03_28 DistilBertForSequenceClassification from jvelja +author: John Snow Labs +name: stego_classifier_checkpoint_epoch_80_2024_07_26_16_03_28 +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`stego_classifier_checkpoint_epoch_80_2024_07_26_16_03_28` is a English model originally trained by jvelja. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/stego_classifier_checkpoint_epoch_80_2024_07_26_16_03_28_en_5.5.0_3.0_1725775006225.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/stego_classifier_checkpoint_epoch_80_2024_07_26_16_03_28_en_5.5.0_3.0_1725775006225.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DistilBertForSequenceClassification.pretrained("stego_classifier_checkpoint_epoch_80_2024_07_26_16_03_28","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DistilBertForSequenceClassification.pretrained("stego_classifier_checkpoint_epoch_80_2024_07_26_16_03_28", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|stego_classifier_checkpoint_epoch_80_2024_07_26_16_03_28| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/jvelja/stego-classifier-checkpoint-epoch-80-2024-07-26_16-03-28 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-student_marian_english_romanian_6_3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-student_marian_english_romanian_6_3_pipeline_en.md new file mode 100644 index 00000000000000..0e3dc64a51047c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-student_marian_english_romanian_6_3_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English student_marian_english_romanian_6_3_pipeline pipeline MarianTransformer from sshleifer +author: John Snow Labs +name: student_marian_english_romanian_6_3_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`student_marian_english_romanian_6_3_pipeline` is a English model originally trained by sshleifer. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/student_marian_english_romanian_6_3_pipeline_en_5.5.0_3.0_1725795902705.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/student_marian_english_romanian_6_3_pipeline_en_5.5.0_3.0_1725795902705.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("student_marian_english_romanian_6_3_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("student_marian_english_romanian_6_3_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
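+
+The `df` in the snippet above is any Spark DataFrame with a `text` column. A minimal end-to-end sketch (assuming a local Spark NLP session and that this pipeline writes the translated text to a `translation` column):
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+from pyspark.sql import functions as F
+
+spark = sparknlp.start()
+
+# Any DataFrame with a "text" column works as input.
+df = spark.createDataFrame([["This is a short test sentence."]]).toDF("text")
+
+pipeline = PretrainedPipeline("student_marian_english_romanian_6_3_pipeline", lang="en")
+annotations = pipeline.transform(df)
+
+# The translated sentences sit in the MarianTransformer output column
+# (assumed to be named "translation" here).
+annotations.select(F.col("translation.result")).show(truncate=False)
+```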
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|student_marian_english_romanian_6_3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|292.1 MB| + +## References + +https://huggingface.co/sshleifer/student_marian_en_ro_6_3 + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-tatar_oilwells_demo_model_en.md b/docs/_posts/ahmedlone127/2024-09-08-tatar_oilwells_demo_model_en.md new file mode 100644 index 00000000000000..89c88cafa82663 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-tatar_oilwells_demo_model_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English tatar_oilwells_demo_model DistilBertEmbeddings from Imvignesh +author: John Snow Labs +name: tatar_oilwells_demo_model +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`tatar_oilwells_demo_model` is a English model originally trained by Imvignesh. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/tatar_oilwells_demo_model_en_5.5.0_3.0_1725776267664.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/tatar_oilwells_demo_model_en_5.5.0_3.0_1725776267664.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("tatar_oilwells_demo_model","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("tatar_oilwells_demo_model","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|tatar_oilwells_demo_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Imvignesh/tt-oilwells-demo-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-test_en.md b/docs/_posts/ahmedlone127/2024-09-08-test_en.md new file mode 100644 index 00000000000000..4315498e815fe8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-test_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English test MPNetEmbeddings from sheaDurgin +author: John Snow Labs +name: test +date: 2024-09-08 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`test` is a English model originally trained by sheaDurgin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/test_en_5.5.0_3.0_1725817125544.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/test_en_5.5.0_3.0_1725817125544.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("test","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("test","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
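+
+To work with the raw vectors, the float arrays can be pulled out of the annotation structs; a small sketch reusing `pipelineDF` and the `embeddings` column from the snippet above:
+
+```python
+from pyspark.sql import functions as F
+
+# Each annotation carries its embedding as a float array; explode to get
+# one vector per document row.
+vectors = pipelineDF.select(F.explode(F.col("embeddings.embeddings")).alias("vector"))
+vectors.show(1, truncate=80)
+```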
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|test| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.2 MB| + +## References + +https://huggingface.co/sheaDurgin/test \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-test_hf_distil_bert_toxic_df_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-test_hf_distil_bert_toxic_df_pipeline_en.md new file mode 100644 index 00000000000000..bcfaa2ce7eadee --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-test_hf_distil_bert_toxic_df_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English test_hf_distil_bert_toxic_df_pipeline pipeline DistilBertForSequenceClassification from AgneyPraseed +author: John Snow Labs +name: test_hf_distil_bert_toxic_df_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`test_hf_distil_bert_toxic_df_pipeline` is a English model originally trained by AgneyPraseed. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/test_hf_distil_bert_toxic_df_pipeline_en_5.5.0_3.0_1725808406806.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/test_hf_distil_bert_toxic_df_pipeline_en_5.5.0_3.0_1725808406806.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("test_hf_distil_bert_toxic_df_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("test_hf_distil_bert_toxic_df_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|test_hf_distil_bert_toxic_df_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/AgneyPraseed/test_hf_distil_bert_toxic_df + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-test_model_lori0330_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-test_model_lori0330_pipeline_en.md new file mode 100644 index 00000000000000..225548e2977207 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-test_model_lori0330_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English test_model_lori0330_pipeline pipeline DistilBertForSequenceClassification from lori0330 +author: John Snow Labs +name: test_model_lori0330_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`test_model_lori0330_pipeline` is a English model originally trained by lori0330. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/test_model_lori0330_pipeline_en_5.5.0_3.0_1725809019075.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/test_model_lori0330_pipeline_en_5.5.0_3.0_1725809019075.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("test_model_lori0330_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("test_model_lori0330_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|test_model_lori0330_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/lori0330/test-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-testing1_anni000_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-testing1_anni000_pipeline_en.md new file mode 100644 index 00000000000000..52a37c00b18ec3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-testing1_anni000_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English testing1_anni000_pipeline pipeline DistilBertForSequenceClassification from Anni000 +author: John Snow Labs +name: testing1_anni000_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`testing1_anni000_pipeline` is a English model originally trained by Anni000. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/testing1_anni000_pipeline_en_5.5.0_3.0_1725808628500.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/testing1_anni000_pipeline_en_5.5.0_3.0_1725808628500.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("testing1_anni000_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("testing1_anni000_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|testing1_anni000_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/Anni000/testing1 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-tipo_campanya_ong_v3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-tipo_campanya_ong_v3_pipeline_en.md new file mode 100644 index 00000000000000..071b225880e21d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-tipo_campanya_ong_v3_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English tipo_campanya_ong_v3_pipeline pipeline MPNetEmbeddings from api19750904 +author: John Snow Labs +name: tipo_campanya_ong_v3_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`tipo_campanya_ong_v3_pipeline` is a English model originally trained by api19750904. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/tipo_campanya_ong_v3_pipeline_en_5.5.0_3.0_1725816735612.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/tipo_campanya_ong_v3_pipeline_en_5.5.0_3.0_1725816735612.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("tipo_campanya_ong_v3_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("tipo_campanya_ong_v3_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|tipo_campanya_ong_v3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|407.2 MB| + +## References + +https://huggingface.co/api19750904/tipo_campanya_ong_v3 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-token_classification_model_vishnun0027_en.md b/docs/_posts/ahmedlone127/2024-09-08-token_classification_model_vishnun0027_en.md new file mode 100644 index 00000000000000..79e2637d831af8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-token_classification_model_vishnun0027_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English token_classification_model_vishnun0027 DistilBertForTokenClassification from vishnun0027 +author: John Snow Labs +name: token_classification_model_vishnun0027 +date: 2024-09-08 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`token_classification_model_vishnun0027` is a English model originally trained by vishnun0027. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/token_classification_model_vishnun0027_en_5.5.0_3.0_1725788375101.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/token_classification_model_vishnun0027_en_5.5.0_3.0_1725788375101.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("token_classification_model_vishnun0027","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("token_classification_model_vishnun0027", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|token_classification_model_vishnun0027| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/vishnun0027/token_classification_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-trainer1f_en.md b/docs/_posts/ahmedlone127/2024-09-08-trainer1f_en.md new file mode 100644 index 00000000000000..2fbdc064c75065 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-trainer1f_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English trainer1f DistilBertForSequenceClassification from SimoneJLaudani +author: John Snow Labs +name: trainer1f +date: 2024-09-08 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`trainer1f` is a English model originally trained by SimoneJLaudani. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/trainer1f_en_5.5.0_3.0_1725777195053.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/trainer1f_en_5.5.0_3.0_1725777195053.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DistilBertForSequenceClassification.pretrained("trainer1f","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DistilBertForSequenceClassification.pretrained("trainer1f", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|trainer1f| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/SimoneJLaudani/trainer1F \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-translation_not_evaluated_en.md b/docs/_posts/ahmedlone127/2024-09-08-translation_not_evaluated_en.md new file mode 100644 index 00000000000000..5103c4bceab534 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-translation_not_evaluated_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English translation_not_evaluated MarianTransformer from autoevaluate +author: John Snow Labs +name: translation_not_evaluated +date: 2024-09-08 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`translation_not_evaluated` is a English model originally trained by autoevaluate. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/translation_not_evaluated_en_5.5.0_3.0_1725766500409.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/translation_not_evaluated_en_5.5.0_3.0_1725766500409.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("translation_not_evaluated","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("translation_not_evaluated","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
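+
+For quick ad-hoc checks, the fitted pipeline can also be wrapped in a `LightPipeline`, which accepts plain strings instead of a DataFrame; a sketch reusing `pipelineModel` from the snippet above:
+
+```python
+from sparknlp.base import LightPipeline
+
+light = LightPipeline(pipelineModel)
+result = light.annotate("I love spark-nlp")
+
+# The translated text is keyed by the output column name ("translation").
+print(result["translation"])
+```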
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|translation_not_evaluated| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.6 MB| + +## References + +https://huggingface.co/autoevaluate/translation-not-evaluated \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-translation_not_evaluated_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-translation_not_evaluated_pipeline_en.md new file mode 100644 index 00000000000000..67119eeb0798ee --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-translation_not_evaluated_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English translation_not_evaluated_pipeline pipeline MarianTransformer from autoevaluate +author: John Snow Labs +name: translation_not_evaluated_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`translation_not_evaluated_pipeline` is a English model originally trained by autoevaluate. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/translation_not_evaluated_pipeline_en_5.5.0_3.0_1725766525586.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/translation_not_evaluated_pipeline_en_5.5.0_3.0_1725766525586.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("translation_not_evaluated_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("translation_not_evaluated_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|translation_not_evaluated_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|509.1 MB| + +## References + +https://huggingface.co/autoevaluate/translation-not-evaluated + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-wire_clustering_na_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-wire_clustering_na_pipeline_en.md new file mode 100644 index 00000000000000..a1518fbaf2bfcf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-wire_clustering_na_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English wire_clustering_na_pipeline pipeline MPNetEmbeddings from dell-research-harvard +author: John Snow Labs +name: wire_clustering_na_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`wire_clustering_na_pipeline` is a English model originally trained by dell-research-harvard. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/wire_clustering_na_pipeline_en_5.5.0_3.0_1725817276515.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/wire_clustering_na_pipeline_en_5.5.0_3.0_1725817276515.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("wire_clustering_na_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("wire_clustering_na_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|wire_clustering_na_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|407.3 MB| + +## References + +https://huggingface.co/dell-research-harvard/wire-clustering-na + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_final_mixed_aug_insert_bert_2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_final_mixed_aug_insert_bert_2_pipeline_en.md new file mode 100644 index 00000000000000..085e690cc74276 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_final_mixed_aug_insert_bert_2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_final_mixed_aug_insert_bert_2_pipeline pipeline XlmRoBertaForSequenceClassification from ThuyNT03 +author: John Snow Labs +name: xlm_roberta_base_final_mixed_aug_insert_bert_2_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_final_mixed_aug_insert_bert_2_pipeline` is a English model originally trained by ThuyNT03. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_final_mixed_aug_insert_bert_2_pipeline_en_5.5.0_3.0_1725780728467.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_final_mixed_aug_insert_bert_2_pipeline_en_5.5.0_3.0_1725780728467.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_final_mixed_aug_insert_bert_2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_final_mixed_aug_insert_bert_2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_final_mixed_aug_insert_bert_2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|795.4 MB| + +## References + +https://huggingface.co/ThuyNT03/xlm-roberta-base-Final_Mixed-aug_insert_BERT-2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_finetuned_panx_english_cyycyy_en.md b/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_finetuned_panx_english_cyycyy_en.md new file mode 100644 index 00000000000000..62e31afcfe4cdc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_finetuned_panx_english_cyycyy_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_english_cyycyy XlmRoBertaForTokenClassification from cyycyy +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_english_cyycyy +date: 2024-09-08 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_english_cyycyy` is a English model originally trained by cyycyy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_english_cyycyy_en_5.5.0_3.0_1725783839649.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_english_cyycyy_en_5.5.0_3.0_1725783839649.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_english_cyycyy","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_english_cyycyy", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
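+
+If entity chunks (rather than per-token tags) are needed, a `NerConverter` stage can be appended; the following is a sketch, assuming the `document`, `token` and `ner` columns produced by the snippet above:
+
+```python
+from sparknlp.annotator import NerConverter
+
+# Group token-level NER tags into entity chunks.
+converter = NerConverter() \
+    .setInputCols(["document", "token", "ner"]) \
+    .setOutputCol("ner_chunk")
+
+chunked = converter.transform(pipelineDF)
+chunked.selectExpr("explode(ner_chunk) as chunk") \
+    .selectExpr("chunk.result", "chunk.metadata['entity'] as entity") \
+    .show(truncate=False)
+```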
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_english_cyycyy| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|826.4 MB| + +## References + +https://huggingface.co/cyycyy/xlm-roberta-base-finetuned-panx-en \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_finetuned_panx_english_hbtemari_en.md b/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_finetuned_panx_english_hbtemari_en.md new file mode 100644 index 00000000000000..9ead9a2645a047 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_finetuned_panx_english_hbtemari_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_english_hbtemari XlmRoBertaForTokenClassification from HBtemari +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_english_hbtemari +date: 2024-09-08 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_english_hbtemari` is a English model originally trained by HBtemari. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_english_hbtemari_en_5.5.0_3.0_1725807049023.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_english_hbtemari_en_5.5.0_3.0_1725807049023.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_english_hbtemari","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_english_hbtemari", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_english_hbtemari| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|826.4 MB| + +## References + +https://huggingface.co/HBtemari/xlm-roberta-base-finetuned-panx-en \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_finetuned_panx_french_yezune_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_finetuned_panx_french_yezune_pipeline_en.md new file mode 100644 index 00000000000000..f08aa9ed8eb25d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_finetuned_panx_french_yezune_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_yezune_pipeline pipeline XlmRoBertaForTokenClassification from yezune +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_yezune_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_yezune_pipeline` is a English model originally trained by yezune. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_yezune_pipeline_en_5.5.0_3.0_1725783758008.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_yezune_pipeline_en_5.5.0_3.0_1725783758008.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_french_yezune_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_french_yezune_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_yezune_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|840.9 MB| + +## References + +https://huggingface.co/yezune/xlm-roberta-base-finetuned-panx-fr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_finetuned_panx_german_french_jjglilleberg_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_finetuned_panx_german_french_jjglilleberg_pipeline_en.md new file mode 100644 index 00000000000000..02d8af74246e59 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_finetuned_panx_german_french_jjglilleberg_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_french_jjglilleberg_pipeline pipeline XlmRoBertaForTokenClassification from jjglilleberg +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_french_jjglilleberg_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_french_jjglilleberg_pipeline` is a English model originally trained by jjglilleberg. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_jjglilleberg_pipeline_en_5.5.0_3.0_1725785095881.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_jjglilleberg_pipeline_en_5.5.0_3.0_1725785095881.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_french_jjglilleberg_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_french_jjglilleberg_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_french_jjglilleberg_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|843.4 MB| + +## References + +https://huggingface.co/jjglilleberg/xlm-roberta-base-finetuned-panx-de-fr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_lcc_english_persian_farsi_2e_5_42_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_lcc_english_persian_farsi_2e_5_42_pipeline_en.md new file mode 100644 index 00000000000000..e737ee53fb1928 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_lcc_english_persian_farsi_2e_5_42_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_lcc_english_persian_farsi_2e_5_42_pipeline pipeline XlmRoBertaForSequenceClassification from EhsanAghazadeh +author: John Snow Labs +name: xlm_roberta_base_lcc_english_persian_farsi_2e_5_42_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_lcc_english_persian_farsi_2e_5_42_pipeline` is a English model originally trained by EhsanAghazadeh. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_lcc_english_persian_farsi_2e_5_42_pipeline_en_5.5.0_3.0_1725799543432.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_lcc_english_persian_farsi_2e_5_42_pipeline_en_5.5.0_3.0_1725799543432.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_lcc_english_persian_farsi_2e_5_42_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_lcc_english_persian_farsi_2e_5_42_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_lcc_english_persian_farsi_2e_5_42_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|818.6 MB| + +## References + +https://huggingface.co/EhsanAghazadeh/xlm-roberta-base-lcc-en-fa-2e-5-42 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_sentiment_multilingual_xx.md b/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_sentiment_multilingual_xx.md new file mode 100644 index 00000000000000..31cdf2aaa0cbe1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_sentiment_multilingual_xx.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Multilingual xlm_roberta_base_sentiment_multilingual XlmRoBertaForSequenceClassification from cardiffnlp +author: John Snow Labs +name: xlm_roberta_base_sentiment_multilingual +date: 2024-09-08 +tags: [xx, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_sentiment_multilingual` is a Multilingual model originally trained by cardiffnlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_sentiment_multilingual_xx_5.5.0_3.0_1725780805034.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_sentiment_multilingual_xx_5.5.0_3.0_1725780805034.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_roberta_base_sentiment_multilingual","xx") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_roberta_base_sentiment_multilingual", "xx")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
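+
+Because the model is multilingual, the same fitted pipeline can score mixed-language input; a small sketch reusing `pipelineModel` from the snippet above (the sentiment label set comes from the underlying model):
+
+```python
+from pyspark.sql import functions as F
+
+# Two example rows in different languages, scored by the same pipeline.
+data = spark.createDataFrame(
+    [["I love spark-nlp"], ["Ich mag dieses Produkt überhaupt nicht"]]
+).toDF("text")
+
+preds = pipelineModel.transform(data)
+preds.select("text", F.col("class.result").alias("sentiment")).show(truncate=False)
+```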
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_sentiment_multilingual| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|xx| +|Size:|820.0 MB| + +## References + +https://huggingface.co/cardiffnlp/xlm-roberta-base-sentiment-multilingual \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_tweet_sentiment_english_trimmed_english_60000_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_tweet_sentiment_english_trimmed_english_60000_pipeline_en.md new file mode 100644 index 00000000000000..fad63eed7088d0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-xlm_roberta_base_tweet_sentiment_english_trimmed_english_60000_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_tweet_sentiment_english_trimmed_english_60000_pipeline pipeline XlmRoBertaForSequenceClassification from vocabtrimmer +author: John Snow Labs +name: xlm_roberta_base_tweet_sentiment_english_trimmed_english_60000_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_tweet_sentiment_english_trimmed_english_60000_pipeline` is a English model originally trained by vocabtrimmer. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_tweet_sentiment_english_trimmed_english_60000_pipeline_en_5.5.0_3.0_1725780429424.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_tweet_sentiment_english_trimmed_english_60000_pipeline_en_5.5.0_3.0_1725780429424.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_tweet_sentiment_english_trimmed_english_60000_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_tweet_sentiment_english_trimmed_english_60000_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_tweet_sentiment_english_trimmed_english_60000_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|444.6 MB| + +## References + +https://huggingface.co/vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-60000 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-xlmr_base_trained_panx_japanese_en.md b/docs/_posts/ahmedlone127/2024-09-08-xlmr_base_trained_panx_japanese_en.md new file mode 100644 index 00000000000000..def30ab2d7bbff --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-xlmr_base_trained_panx_japanese_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlmr_base_trained_panx_japanese XlmRoBertaForTokenClassification from DeepaPeri +author: John Snow Labs +name: xlmr_base_trained_panx_japanese +date: 2024-09-08 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlmr_base_trained_panx_japanese` is a English model originally trained by DeepaPeri. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlmr_base_trained_panx_japanese_en_5.5.0_3.0_1725784016440.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlmr_base_trained_panx_japanese_en_5.5.0_3.0_1725784016440.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlmr_base_trained_panx_japanese","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlmr_base_trained_panx_japanese", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
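
After the pipeline above has been fit and applied, the token-level predictions sit in the `ner` annotation column next to the tokens in `token`. A small sketch of reading them back, using the column names configured in the example:

```python
# Each row holds the array of tokens and the array of predicted NER tags for that text.
pipelineDF.select("token.result", "ner.result").show(truncate=False)
```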
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlmr_base_trained_panx_japanese| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|789.7 MB| + +## References + +https://huggingface.co/DeepaPeri/XLMR-BASE-TRAINED-PANX-ja \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-xtremedistil_l12_h384_uncased_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-08-xtremedistil_l12_h384_uncased_pipeline_en.md new file mode 100644 index 00000000000000..2cbb1929f80951 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-xtremedistil_l12_h384_uncased_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xtremedistil_l12_h384_uncased_pipeline pipeline BertForSequenceClassification from microsoft +author: John Snow Labs +name: xtremedistil_l12_h384_uncased_pipeline +date: 2024-09-08 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xtremedistil_l12_h384_uncased_pipeline` is a English model originally trained by microsoft. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xtremedistil_l12_h384_uncased_pipeline_en_5.5.0_3.0_1725801986030.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xtremedistil_l12_h384_uncased_pipeline_en_5.5.0_3.0_1725801986030.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xtremedistil_l12_h384_uncased_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xtremedistil_l12_h384_uncased_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xtremedistil_l12_h384_uncased_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|124.2 MB| + +## References + +https://huggingface.co/microsoft/xtremedistil-l12-h384-uncased + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-08-yappy_fine_tuned_opus_maltese_russian_english_en.md b/docs/_posts/ahmedlone127/2024-09-08-yappy_fine_tuned_opus_maltese_russian_english_en.md new file mode 100644 index 00000000000000..bec6e2d96b42e9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-08-yappy_fine_tuned_opus_maltese_russian_english_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English yappy_fine_tuned_opus_maltese_russian_english MarianTransformer from lightsource +author: John Snow Labs +name: yappy_fine_tuned_opus_maltese_russian_english +date: 2024-09-08 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`yappy_fine_tuned_opus_maltese_russian_english` is a English model originally trained by lightsource. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/yappy_fine_tuned_opus_maltese_russian_english_en_5.5.0_3.0_1725824475536.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/yappy_fine_tuned_opus_maltese_russian_english_en_5.5.0_3.0_1725824475536.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("yappy_fine_tuned_opus_maltese_russian_english","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("yappy_fine_tuned_opus_maltese_russian_english","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
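
With the columns wired as above, the translated sentences land in the `translation` annotation column. A short sketch of reading them back:

```python
# One array of translated sentences per input row.
pipelineDF.select("translation.result").show(truncate=False)
```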
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|yappy_fine_tuned_opus_maltese_russian_english| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|526.4 MB| + +## References + +https://huggingface.co/lightsource/yappy-fine-tuned-opus-mt-ru-en \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-action_classifier_v2_en.md b/docs/_posts/ahmedlone127/2024-09-09-action_classifier_v2_en.md new file mode 100644 index 00000000000000..d19d6fc6df7949 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-action_classifier_v2_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English action_classifier_v2 MPNetEmbeddings from futuredatascience +author: John Snow Labs +name: action_classifier_v2 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`action_classifier_v2` is a English model originally trained by futuredatascience. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/action_classifier_v2_en_5.5.0_3.0_1725896688324.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/action_classifier_v2_en_5.5.0_3.0_1725896688324.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("action_classifier_v2","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("action_classifier_v2","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
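
The embeddings computed above are stored in the `embeddings` annotation column; each annotation carries its raw float vector in an `embeddings` field. A small sketch of pulling the vectors out:

```python
# Arrays of sentence embedding vectors, one entry per document annotation.
pipelineDF.select("embeddings.embeddings").show(truncate=False)
```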
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|action_classifier_v2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.0 MB| + +## References + +https://huggingface.co/futuredatascience/action-classifier-v2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-adaptationbert_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-adaptationbert_pipeline_en.md new file mode 100644 index 00000000000000..3aaa8e3777015a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-adaptationbert_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English adaptationbert_pipeline pipeline RoBertaForSequenceClassification from ClimateLouie +author: John Snow Labs +name: adaptationbert_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`adaptationbert_pipeline` is a English model originally trained by ClimateLouie. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/adaptationbert_pipeline_en_5.5.0_3.0_1725911688500.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/adaptationbert_pipeline_en_5.5.0_3.0_1725911688500.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("adaptationbert_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("adaptationbert_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|adaptationbert_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.3 MB| + +## References + +https://huggingface.co/ClimateLouie/AdaptationBERT + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-afriberta_small_finetuned_pidgin_sentiment_ayour2_en.md b/docs/_posts/ahmedlone127/2024-09-09-afriberta_small_finetuned_pidgin_sentiment_ayour2_en.md new file mode 100644 index 00000000000000..ccabc3b95ec381 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-afriberta_small_finetuned_pidgin_sentiment_ayour2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English afriberta_small_finetuned_pidgin_sentiment_ayour2 XlmRoBertaForSequenceClassification from Tiamz +author: John Snow Labs +name: afriberta_small_finetuned_pidgin_sentiment_ayour2 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`afriberta_small_finetuned_pidgin_sentiment_ayour2` is a English model originally trained by Tiamz. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/afriberta_small_finetuned_pidgin_sentiment_ayour2_en_5.5.0_3.0_1725906737205.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/afriberta_small_finetuned_pidgin_sentiment_ayour2_en_5.5.0_3.0_1725906737205.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("afriberta_small_finetuned_pidgin_sentiment_ayour2","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("afriberta_small_finetuned_pidgin_sentiment_ayour2", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
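
The predicted sentiment label for each input row ends up in the `class` annotation column defined above; for example:

```python
# Show each input text next to its predicted label.
pipelineDF.select("text", "class.result").show(truncate=False)
```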
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|afriberta_small_finetuned_pidgin_sentiment_ayour2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|310.8 MB| + +## References + +https://huggingface.co/Tiamz/afriberta_small-finetuned-Pidgin-sentiment-ayour2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-afriberta_small_finetuned_pidgin_sentiment_ayour2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-afriberta_small_finetuned_pidgin_sentiment_ayour2_pipeline_en.md new file mode 100644 index 00000000000000..5ca45489467f8b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-afriberta_small_finetuned_pidgin_sentiment_ayour2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English afriberta_small_finetuned_pidgin_sentiment_ayour2_pipeline pipeline XlmRoBertaForSequenceClassification from Tiamz +author: John Snow Labs +name: afriberta_small_finetuned_pidgin_sentiment_ayour2_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`afriberta_small_finetuned_pidgin_sentiment_ayour2_pipeline` is a English model originally trained by Tiamz. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/afriberta_small_finetuned_pidgin_sentiment_ayour2_pipeline_en_5.5.0_3.0_1725906752437.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/afriberta_small_finetuned_pidgin_sentiment_ayour2_pipeline_en_5.5.0_3.0_1725906752437.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("afriberta_small_finetuned_pidgin_sentiment_ayour2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("afriberta_small_finetuned_pidgin_sentiment_ayour2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|afriberta_small_finetuned_pidgin_sentiment_ayour2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|310.8 MB| + +## References + +https://huggingface.co/Tiamz/afriberta_small-finetuned-Pidgin-sentiment-ayour2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-albert_turkish_turkish_hotel_reviews_pipeline_tr.md b/docs/_posts/ahmedlone127/2024-09-09-albert_turkish_turkish_hotel_reviews_pipeline_tr.md new file mode 100644 index 00000000000000..674f54a446a0e4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-albert_turkish_turkish_hotel_reviews_pipeline_tr.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Turkish albert_turkish_turkish_hotel_reviews_pipeline pipeline AlbertForSequenceClassification from anilguven +author: John Snow Labs +name: albert_turkish_turkish_hotel_reviews_pipeline +date: 2024-09-09 +tags: [tr, open_source, pipeline, onnx] +task: Text Classification +language: tr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained AlbertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`albert_turkish_turkish_hotel_reviews_pipeline` is a Turkish model originally trained by anilguven. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/albert_turkish_turkish_hotel_reviews_pipeline_tr_5.5.0_3.0_1725854608578.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/albert_turkish_turkish_hotel_reviews_pipeline_tr_5.5.0_3.0_1725854608578.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("albert_turkish_turkish_hotel_reviews_pipeline", lang = "tr") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("albert_turkish_turkish_hotel_reviews_pipeline", lang = "tr") +val annotations = pipeline.transform(df) + +``` +
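
Because this pipeline was trained on Turkish hotel reviews, `df` should hold Turkish text. A sketch with an invented review, assuming an active `spark` session:

```python
# Hypothetical Turkish hotel review; the "text" column is what the pipeline reads.
df = spark.createDataFrame([["Otel çok temizdi ve personel çok yardımcıydı."]]).toDF("text")
annotations = pipeline.transform(df)
```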
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|albert_turkish_turkish_hotel_reviews_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|tr| +|Size:|45.1 MB| + +## References + +https://huggingface.co/anilguven/albert_tr_turkish_hotel_reviews + +## Included Models + +- DocumentAssembler +- TokenizerModel +- AlbertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-albert_turkish_turkish_hotel_reviews_tr.md b/docs/_posts/ahmedlone127/2024-09-09-albert_turkish_turkish_hotel_reviews_tr.md new file mode 100644 index 00000000000000..ad6c74a0dfde72 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-albert_turkish_turkish_hotel_reviews_tr.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Turkish albert_turkish_turkish_hotel_reviews AlbertForSequenceClassification from anilguven +author: John Snow Labs +name: albert_turkish_turkish_hotel_reviews +date: 2024-09-09 +tags: [tr, open_source, onnx, sequence_classification, albert] +task: Text Classification +language: tr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: AlbertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained AlbertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`albert_turkish_turkish_hotel_reviews` is a Turkish model originally trained by anilguven. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/albert_turkish_turkish_hotel_reviews_tr_5.5.0_3.0_1725854606150.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/albert_turkish_turkish_hotel_reviews_tr_5.5.0_3.0_1725854606150.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = AlbertForSequenceClassification.pretrained("albert_turkish_turkish_hotel_reviews","tr") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = AlbertForSequenceClassification.pretrained("albert_turkish_turkish_hotel_reviews", "tr")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|albert_turkish_turkish_hotel_reviews| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|tr| +|Size:|45.1 MB| + +## References + +https://huggingface.co/anilguven/albert_tr_turkish_hotel_reviews \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-alberta_base_akadhim_ai_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-alberta_base_akadhim_ai_pipeline_en.md new file mode 100644 index 00000000000000..142285937a3e29 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-alberta_base_akadhim_ai_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English alberta_base_akadhim_ai_pipeline pipeline RoBertaForSequenceClassification from akadhim-ai +author: John Snow Labs +name: alberta_base_akadhim_ai_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`alberta_base_akadhim_ai_pipeline` is a English model originally trained by akadhim-ai. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/alberta_base_akadhim_ai_pipeline_en_5.5.0_3.0_1725902983404.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/alberta_base_akadhim_ai_pipeline_en_5.5.0_3.0_1725902983404.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("alberta_base_akadhim_ai_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("alberta_base_akadhim_ai_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|alberta_base_akadhim_ai_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|424.3 MB| + +## References + +https://huggingface.co/akadhim-ai/alberta_base + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-all_mpnet_base_v2_2epoch_100pair_mar2_closs_prsn_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-all_mpnet_base_v2_2epoch_100pair_mar2_closs_prsn_pipeline_en.md new file mode 100644 index 00000000000000..28e08e238d76e0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-all_mpnet_base_v2_2epoch_100pair_mar2_closs_prsn_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English all_mpnet_base_v2_2epoch_100pair_mar2_closs_prsn_pipeline pipeline MPNetEmbeddings from ahessamb +author: John Snow Labs +name: all_mpnet_base_v2_2epoch_100pair_mar2_closs_prsn_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_mpnet_base_v2_2epoch_100pair_mar2_closs_prsn_pipeline` is a English model originally trained by ahessamb. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_2epoch_100pair_mar2_closs_prsn_pipeline_en_5.5.0_3.0_1725896818897.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_2epoch_100pair_mar2_closs_prsn_pipeline_en_5.5.0_3.0_1725896818897.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("all_mpnet_base_v2_2epoch_100pair_mar2_closs_prsn_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("all_mpnet_base_v2_2epoch_100pair_mar2_closs_prsn_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
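
This pipeline only assembles documents and computes MPNet embeddings (see the Included Models list below), and the exact output column names are not listed here, so the quickest way to see what `pipeline.transform(df)` produced is to inspect the result schema:

```python
# Print the annotation columns the pipeline added, then look at the rows themselves.
annotations = pipeline.transform(df)
annotations.printSchema()
annotations.show(truncate=False)
```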
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_mpnet_base_v2_2epoch_100pair_mar2_closs_prsn_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/ahessamb/all-mpnet-base-v2-2epoch-100pair-mar2-closs-prsn + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-all_mpnet_base_v2_kunwooshin_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-all_mpnet_base_v2_kunwooshin_pipeline_en.md new file mode 100644 index 00000000000000..e8f2d682adcc22 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-all_mpnet_base_v2_kunwooshin_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English all_mpnet_base_v2_kunwooshin_pipeline pipeline MPNetEmbeddings from kunwooshin +author: John Snow Labs +name: all_mpnet_base_v2_kunwooshin_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_mpnet_base_v2_kunwooshin_pipeline` is a English model originally trained by kunwooshin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_kunwooshin_pipeline_en_5.5.0_3.0_1725874540742.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_kunwooshin_pipeline_en_5.5.0_3.0_1725874540742.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("all_mpnet_base_v2_kunwooshin_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("all_mpnet_base_v2_kunwooshin_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_mpnet_base_v2_kunwooshin_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/kunwooshin/all-mpnet-base-v2 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-all_mpnet_base_v2_lr_1e_8_margin_1_epoch_3_en.md b/docs/_posts/ahmedlone127/2024-09-09-all_mpnet_base_v2_lr_1e_8_margin_1_epoch_3_en.md new file mode 100644 index 00000000000000..7f389b125d72c6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-all_mpnet_base_v2_lr_1e_8_margin_1_epoch_3_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English all_mpnet_base_v2_lr_1e_8_margin_1_epoch_3 MPNetEmbeddings from luiz-and-robert-thesis +author: John Snow Labs +name: all_mpnet_base_v2_lr_1e_8_margin_1_epoch_3 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_mpnet_base_v2_lr_1e_8_margin_1_epoch_3` is a English model originally trained by luiz-and-robert-thesis. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_lr_1e_8_margin_1_epoch_3_en_5.5.0_3.0_1725874646722.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_lr_1e_8_margin_1_epoch_3_en_5.5.0_3.0_1725874646722.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("all_mpnet_base_v2_lr_1e_8_margin_1_epoch_3","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("all_mpnet_base_v2_lr_1e_8_margin_1_epoch_3","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_mpnet_base_v2_lr_1e_8_margin_1_epoch_3| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/luiz-and-robert-thesis/all-mpnet-base-v2-lr-1e-8-margin-1-epoch-3 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-all_mpnet_base_v2_margin_5_epoch_3_en.md b/docs/_posts/ahmedlone127/2024-09-09-all_mpnet_base_v2_margin_5_epoch_3_en.md new file mode 100644 index 00000000000000..194f7653b0d3b9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-all_mpnet_base_v2_margin_5_epoch_3_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English all_mpnet_base_v2_margin_5_epoch_3 MPNetEmbeddings from luiz-and-robert-thesis +author: John Snow Labs +name: all_mpnet_base_v2_margin_5_epoch_3 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_mpnet_base_v2_margin_5_epoch_3` is a English model originally trained by luiz-and-robert-thesis. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_margin_5_epoch_3_en_5.5.0_3.0_1725875026101.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_margin_5_epoch_3_en_5.5.0_3.0_1725875026101.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("all_mpnet_base_v2_margin_5_epoch_3","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("all_mpnet_base_v2_margin_5_epoch_3","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_mpnet_base_v2_margin_5_epoch_3| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/luiz-and-robert-thesis/all-mpnet-base-v2-margin-5-epoch-3 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-all_mpnet_outcome_similarity_en.md b/docs/_posts/ahmedlone127/2024-09-09-all_mpnet_outcome_similarity_en.md new file mode 100644 index 00000000000000..46c821b83f16af --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-all_mpnet_outcome_similarity_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English all_mpnet_outcome_similarity MPNetEmbeddings from laiking +author: John Snow Labs +name: all_mpnet_outcome_similarity +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_mpnet_outcome_similarity` is a English model originally trained by laiking. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_mpnet_outcome_similarity_en_5.5.0_3.0_1725896406048.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_mpnet_outcome_similarity_en_5.5.0_3.0_1725896406048.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("all_mpnet_outcome_similarity","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("all_mpnet_outcome_similarity","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_mpnet_outcome_similarity| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/laiking/all-mpnet-outcome-similarity \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-all_roberta_large_v1_meta_1000_16_5_oos_en.md b/docs/_posts/ahmedlone127/2024-09-09-all_roberta_large_v1_meta_1000_16_5_oos_en.md new file mode 100644 index 00000000000000..670e275f189643 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-all_roberta_large_v1_meta_1000_16_5_oos_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English all_roberta_large_v1_meta_1000_16_5_oos RoBertaForSequenceClassification from fathyshalab +author: John Snow Labs +name: all_roberta_large_v1_meta_1000_16_5_oos +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_roberta_large_v1_meta_1000_16_5_oos` is a English model originally trained by fathyshalab. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_roberta_large_v1_meta_1000_16_5_oos_en_5.5.0_3.0_1725912234526.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_roberta_large_v1_meta_1000_16_5_oos_en_5.5.0_3.0_1725912234526.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = RoBertaForSequenceClassification.pretrained("all_roberta_large_v1_meta_1000_16_5_oos","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = RoBertaForSequenceClassification.pretrained("all_roberta_large_v1_meta_1000_16_5_oos", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_roberta_large_v1_meta_1000_16_5_oos| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/fathyshalab/all-roberta-large-v1-meta-1000-16-5-oos \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-all_roberta_large_v1_work_1_16_5_en.md b/docs/_posts/ahmedlone127/2024-09-09-all_roberta_large_v1_work_1_16_5_en.md new file mode 100644 index 00000000000000..e6996efc053d42 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-all_roberta_large_v1_work_1_16_5_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English all_roberta_large_v1_work_1_16_5 RoBertaForSequenceClassification from fathyshalab +author: John Snow Labs +name: all_roberta_large_v1_work_1_16_5 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_roberta_large_v1_work_1_16_5` is a English model originally trained by fathyshalab. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_roberta_large_v1_work_1_16_5_en_5.5.0_3.0_1725920964543.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_roberta_large_v1_work_1_16_5_en_5.5.0_3.0_1725920964543.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = RoBertaForSequenceClassification.pretrained("all_roberta_large_v1_work_1_16_5","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = RoBertaForSequenceClassification.pretrained("all_roberta_large_v1_work_1_16_5", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_roberta_large_v1_work_1_16_5| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/fathyshalab/all-roberta-large-v1-work-1-16-5 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-all_roberta_large_v1_work_4_16_5_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-all_roberta_large_v1_work_4_16_5_pipeline_en.md new file mode 100644 index 00000000000000..e0f59876f679ca --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-all_roberta_large_v1_work_4_16_5_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English all_roberta_large_v1_work_4_16_5_pipeline pipeline RoBertaForSequenceClassification from fathyshalab +author: John Snow Labs +name: all_roberta_large_v1_work_4_16_5_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_roberta_large_v1_work_4_16_5_pipeline` is a English model originally trained by fathyshalab. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_roberta_large_v1_work_4_16_5_pipeline_en_5.5.0_3.0_1725904152919.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_roberta_large_v1_work_4_16_5_pipeline_en_5.5.0_3.0_1725904152919.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("all_roberta_large_v1_work_4_16_5_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("all_roberta_large_v1_work_4_16_5_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_roberta_large_v1_work_4_16_5_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/fathyshalab/all-roberta-large-v1-work-4-16-5 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-alvaro_marian_finetuned_italian_arabic_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-alvaro_marian_finetuned_italian_arabic_pipeline_en.md new file mode 100644 index 00000000000000..10139a5fdba4cb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-alvaro_marian_finetuned_italian_arabic_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English alvaro_marian_finetuned_italian_arabic_pipeline pipeline MarianTransformer from Rooshan +author: John Snow Labs +name: alvaro_marian_finetuned_italian_arabic_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`alvaro_marian_finetuned_italian_arabic_pipeline` is a English model originally trained by Rooshan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/alvaro_marian_finetuned_italian_arabic_pipeline_en_5.5.0_3.0_1725913325881.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/alvaro_marian_finetuned_italian_arabic_pipeline_en_5.5.0_3.0_1725913325881.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("alvaro_marian_finetuned_italian_arabic_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("alvaro_marian_finetuned_italian_arabic_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|alvaro_marian_finetuned_italian_arabic_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|531.8 MB| + +## References + +https://huggingface.co/Rooshan/Alvaro-marian_finetuned_it_ar + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-alvaro_marian_finetuned_italian_pb_en.md b/docs/_posts/ahmedlone127/2024-09-09-alvaro_marian_finetuned_italian_pb_en.md new file mode 100644 index 00000000000000..3f7f75af37b36c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-alvaro_marian_finetuned_italian_pb_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English alvaro_marian_finetuned_italian_pb MarianTransformer from Rooshan +author: John Snow Labs +name: alvaro_marian_finetuned_italian_pb +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`alvaro_marian_finetuned_italian_pb` is a English model originally trained by Rooshan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/alvaro_marian_finetuned_italian_pb_en_5.5.0_3.0_1725914294917.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/alvaro_marian_finetuned_italian_pb_en_5.5.0_3.0_1725914294917.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("alvaro_marian_finetuned_italian_pb","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("alvaro_marian_finetuned_italian_pb","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|alvaro_marian_finetuned_italian_pb| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|376.3 MB| + +## References + +https://huggingface.co/Rooshan/Alvaro-marian_finetuned_it_pb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-alvaro_marian_finetuned_italian_pb_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-alvaro_marian_finetuned_italian_pb_pipeline_en.md new file mode 100644 index 00000000000000..febd90e2413384 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-alvaro_marian_finetuned_italian_pb_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English alvaro_marian_finetuned_italian_pb_pipeline pipeline MarianTransformer from Rooshan +author: John Snow Labs +name: alvaro_marian_finetuned_italian_pb_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`alvaro_marian_finetuned_italian_pb_pipeline` is a English model originally trained by Rooshan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/alvaro_marian_finetuned_italian_pb_pipeline_en_5.5.0_3.0_1725914313793.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/alvaro_marian_finetuned_italian_pb_pipeline_en_5.5.0_3.0_1725914313793.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("alvaro_marian_finetuned_italian_pb_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("alvaro_marian_finetuned_italian_pb_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|alvaro_marian_finetuned_italian_pb_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|376.8 MB| + +## References + +https://huggingface.co/Rooshan/Alvaro-marian_finetuned_it_pb + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-angela_diacritics_untranslated_eval_en.md b/docs/_posts/ahmedlone127/2024-09-09-angela_diacritics_untranslated_eval_en.md new file mode 100644 index 00000000000000..1e867a01646919 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-angela_diacritics_untranslated_eval_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English angela_diacritics_untranslated_eval XlmRoBertaForTokenClassification from azhang1212 +author: John Snow Labs +name: angela_diacritics_untranslated_eval +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`angela_diacritics_untranslated_eval` is a English model originally trained by azhang1212. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/angela_diacritics_untranslated_eval_en_5.5.0_3.0_1725922689574.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/angela_diacritics_untranslated_eval_en_5.5.0_3.0_1725922689574.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("angela_diacritics_untranslated_eval","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("angela_diacritics_untranslated_eval", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
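+
+Once the pipeline has been fitted and applied as above, the token-level predictions can be inspected directly from `pipelineDF`; a small sketch using the column names defined in the snippet:
+
+```python
+# "token.result" holds the tokens and "ner.result" the predicted tags,
+# one array element per token.
+pipelineDF.select("token.result", "ner.result").show(truncate=False)
+```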
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|angela_diacritics_untranslated_eval| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|1.0 GB| + +## References + +https://huggingface.co/azhang1212/angela_diacritics_untranslated_eval \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-anime_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-anime_pipeline_en.md new file mode 100644 index 00000000000000..e065cfd62a0c44 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-anime_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English anime_pipeline pipeline MPNetEmbeddings from toobi +author: John Snow Labs +name: anime_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`anime_pipeline` is a English model originally trained by toobi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/anime_pipeline_en_5.5.0_3.0_1725896713583.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/anime_pipeline_en_5.5.0_3.0_1725896713583.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("anime_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("anime_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|anime_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|407.2 MB| + +## References + +https://huggingface.co/toobi/anime + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-apologise_for_waiting_bert_first256_en.md b/docs/_posts/ahmedlone127/2024-09-09-apologise_for_waiting_bert_first256_en.md new file mode 100644 index 00000000000000..d8f38552a94a87 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-apologise_for_waiting_bert_first256_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English apologise_for_waiting_bert_first256 BertForSequenceClassification from etadevosyan +author: John Snow Labs +name: apologise_for_waiting_bert_first256 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`apologise_for_waiting_bert_first256` is a English model originally trained by etadevosyan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/apologise_for_waiting_bert_first256_en_5.5.0_3.0_1725900802533.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/apologise_for_waiting_bert_first256_en_5.5.0_3.0_1725900802533.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = BertForSequenceClassification.pretrained("apologise_for_waiting_bert_first256","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = BertForSequenceClassification.pretrained("apologise_for_waiting_bert_first256", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
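+
+After `transform`, the predicted label for each input row sits in the `class` annotation column defined above; for example:
+
+```python
+# "class.result" contains the predicted label(s) for each document
+pipelineDF.select("text", "class.result").show(truncate=False)
+```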
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|apologise_for_waiting_bert_first256| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|666.5 MB| + +## References + +https://huggingface.co/etadevosyan/apologise_for_waiting_bert_First256 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-araroberta_sanskrit_saskta_ar.md b/docs/_posts/ahmedlone127/2024-09-09-araroberta_sanskrit_saskta_ar.md new file mode 100644 index 00000000000000..0689b034138e7c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-araroberta_sanskrit_saskta_ar.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Arabic araroberta_sanskrit_saskta RoBertaEmbeddings from reemalyami +author: John Snow Labs +name: araroberta_sanskrit_saskta +date: 2024-09-09 +tags: [ar, open_source, onnx, embeddings, roberta] +task: Embeddings +language: ar +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`araroberta_sanskrit_saskta` is a Arabic model originally trained by reemalyami. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/araroberta_sanskrit_saskta_ar_5.5.0_3.0_1725883325342.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/araroberta_sanskrit_saskta_ar_5.5.0_3.0_1725883325342.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("araroberta_sanskrit_saskta","ar") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("araroberta_sanskrit_saskta","ar") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
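+
+The `embeddings` column produced above follows the standard Spark NLP annotation schema, so the per-token vectors can be unpacked from the annotation's `embeddings` field; a brief sketch, assuming that schema:
+
+```python
+# One annotation per token; "result" is the token text and
+# "embeddings" is the corresponding dense vector.
+pipelineDF.selectExpr("explode(embeddings) as emb") \
+    .selectExpr("emb.result as token", "emb.embeddings as vector") \
+    .show(truncate=False)
+```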
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|araroberta_sanskrit_saskta| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|ar| +|Size:|471.0 MB| + +## References + +https://huggingface.co/reemalyami/AraRoBERTa-SA \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-args_me_crossencoder_v1_en.md b/docs/_posts/ahmedlone127/2024-09-09-args_me_crossencoder_v1_en.md new file mode 100644 index 00000000000000..2f3d23ac13376f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-args_me_crossencoder_v1_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English args_me_crossencoder_v1 RoBertaForSequenceClassification from ragarwal +author: John Snow Labs +name: args_me_crossencoder_v1 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`args_me_crossencoder_v1` is a English model originally trained by ragarwal. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/args_me_crossencoder_v1_en_5.5.0_3.0_1725920566203.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/args_me_crossencoder_v1_en_5.5.0_3.0_1725920566203.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("args_me_crossencoder_v1","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("args_me_crossencoder_v1", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|args_me_crossencoder_v1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/ragarwal/args-me-crossencoder-v1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-argument_classification_ukp_sentence_roberta_en.md b/docs/_posts/ahmedlone127/2024-09-09-argument_classification_ukp_sentence_roberta_en.md new file mode 100644 index 00000000000000..a418883663745e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-argument_classification_ukp_sentence_roberta_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English argument_classification_ukp_sentence_roberta RoBertaForSequenceClassification from anhuu +author: John Snow Labs +name: argument_classification_ukp_sentence_roberta +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`argument_classification_ukp_sentence_roberta` is a English model originally trained by anhuu. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/argument_classification_ukp_sentence_roberta_en_5.5.0_3.0_1725912254407.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/argument_classification_ukp_sentence_roberta_en_5.5.0_3.0_1725912254407.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("argument_classification_ukp_sentence_roberta","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("argument_classification_ukp_sentence_roberta", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|argument_classification_ukp_sentence_roberta| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|437.1 MB| + +## References + +https://huggingface.co/anhuu/argument_classification_UKP_sentence_roberta \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-argument_classification_ukp_sentence_roberta_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-argument_classification_ukp_sentence_roberta_pipeline_en.md new file mode 100644 index 00000000000000..a225f122fee3d5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-argument_classification_ukp_sentence_roberta_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English argument_classification_ukp_sentence_roberta_pipeline pipeline RoBertaForSequenceClassification from anhuu +author: John Snow Labs +name: argument_classification_ukp_sentence_roberta_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`argument_classification_ukp_sentence_roberta_pipeline` is a English model originally trained by anhuu. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/argument_classification_ukp_sentence_roberta_pipeline_en_5.5.0_3.0_1725912292229.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/argument_classification_ukp_sentence_roberta_pipeline_en_5.5.0_3.0_1725912292229.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("argument_classification_ukp_sentence_roberta_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("argument_classification_ukp_sentence_roberta_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|argument_classification_ukp_sentence_roberta_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|437.1 MB| + +## References + +https://huggingface.co/anhuu/argument_classification_UKP_sentence_roberta + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-argureviews_sentiment_deberta_v1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-argureviews_sentiment_deberta_v1_pipeline_en.md new file mode 100644 index 00000000000000..4d9a074154326f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-argureviews_sentiment_deberta_v1_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English argureviews_sentiment_deberta_v1_pipeline pipeline DeBertaForSequenceClassification from nihiluis +author: John Snow Labs +name: argureviews_sentiment_deberta_v1_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`argureviews_sentiment_deberta_v1_pipeline` is a English model originally trained by nihiluis. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/argureviews_sentiment_deberta_v1_pipeline_en_5.5.0_3.0_1725859657517.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/argureviews_sentiment_deberta_v1_pipeline_en_5.5.0_3.0_1725859657517.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("argureviews_sentiment_deberta_v1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("argureviews_sentiment_deberta_v1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
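+
+Besides `transform` on a DataFrame, a `PretrainedPipeline` can also be exercised on a single string, which is convenient for quick checks; a sketch only (the keys of the returned dictionary depend on the pipeline's output columns, so the exact keys are an assumption here):
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("argureviews_sentiment_deberta_v1_pipeline", lang = "en")
+
+# annotate() runs the pipeline on one text and returns a dict of output columns
+result = pipeline.annotate("The build quality is excellent but the battery disappoints.")
+print(result)
+```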
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|argureviews_sentiment_deberta_v1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.5 GB| + +## References + +https://huggingface.co/nihiluis/argureviews-sentiment-deberta_v1 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-artificially_natural_roberta_jan_2024_en.md b/docs/_posts/ahmedlone127/2024-09-09-artificially_natural_roberta_jan_2024_en.md new file mode 100644 index 00000000000000..a162ce60b2b741 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-artificially_natural_roberta_jan_2024_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English artificially_natural_roberta_jan_2024 RoBertaForSequenceClassification from ConnyGenz +author: John Snow Labs +name: artificially_natural_roberta_jan_2024 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`artificially_natural_roberta_jan_2024` is a English model originally trained by ConnyGenz. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/artificially_natural_roberta_jan_2024_en_5.5.0_3.0_1725912418139.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/artificially_natural_roberta_jan_2024_en_5.5.0_3.0_1725912418139.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("artificially_natural_roberta_jan_2024","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("artificially_natural_roberta_jan_2024", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|artificially_natural_roberta_jan_2024| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.4 MB| + +## References + +https://huggingface.co/ConnyGenz/artificially-natural-roberta-Jan-2024 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-artificially_natural_roberta_jan_2024_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-artificially_natural_roberta_jan_2024_pipeline_en.md new file mode 100644 index 00000000000000..7ec54b8c7cb856 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-artificially_natural_roberta_jan_2024_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English artificially_natural_roberta_jan_2024_pipeline pipeline RoBertaForSequenceClassification from ConnyGenz +author: John Snow Labs +name: artificially_natural_roberta_jan_2024_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`artificially_natural_roberta_jan_2024_pipeline` is a English model originally trained by ConnyGenz. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/artificially_natural_roberta_jan_2024_pipeline_en_5.5.0_3.0_1725912441904.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/artificially_natural_roberta_jan_2024_pipeline_en_5.5.0_3.0_1725912441904.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("artificially_natural_roberta_jan_2024_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("artificially_natural_roberta_jan_2024_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|artificially_natural_roberta_jan_2024_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/ConnyGenz/artificially-natural-roberta-Jan-2024 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-augment_tweet_bert_large_e4_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-augment_tweet_bert_large_e4_pipeline_en.md new file mode 100644 index 00000000000000..6e7b7b353417c8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-augment_tweet_bert_large_e4_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English augment_tweet_bert_large_e4_pipeline pipeline RoBertaForSequenceClassification from JerryYanJiang +author: John Snow Labs +name: augment_tweet_bert_large_e4_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`augment_tweet_bert_large_e4_pipeline` is a English model originally trained by JerryYanJiang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/augment_tweet_bert_large_e4_pipeline_en_5.5.0_3.0_1725911440638.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/augment_tweet_bert_large_e4_pipeline_en_5.5.0_3.0_1725911440638.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("augment_tweet_bert_large_e4_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("augment_tweet_bert_large_e4_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|augment_tweet_bert_large_e4_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/JerryYanJiang/augment-tweet-bert-large-e4 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-autotrain_imdb_1166543179_en.md b/docs/_posts/ahmedlone127/2024-09-09-autotrain_imdb_1166543179_en.md new file mode 100644 index 00000000000000..58f0e43ff56a9f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-autotrain_imdb_1166543179_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English autotrain_imdb_1166543179 RoBertaForSequenceClassification from ameerazam08 +author: John Snow Labs +name: autotrain_imdb_1166543179 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`autotrain_imdb_1166543179` is a English model originally trained by ameerazam08. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/autotrain_imdb_1166543179_en_5.5.0_3.0_1725904655867.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/autotrain_imdb_1166543179_en_5.5.0_3.0_1725904655867.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("autotrain_imdb_1166543179","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("autotrain_imdb_1166543179", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|autotrain_imdb_1166543179| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/ameerazam08/autotrain-imdb-1166543179 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-autotrain_imdb_1166543179_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-autotrain_imdb_1166543179_pipeline_en.md new file mode 100644 index 00000000000000..cc03ced7b73943 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-autotrain_imdb_1166543179_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English autotrain_imdb_1166543179_pipeline pipeline RoBertaForSequenceClassification from ameerazam08 +author: John Snow Labs +name: autotrain_imdb_1166543179_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`autotrain_imdb_1166543179_pipeline` is a English model originally trained by ameerazam08. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/autotrain_imdb_1166543179_pipeline_en_5.5.0_3.0_1725904722578.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/autotrain_imdb_1166543179_pipeline_en_5.5.0_3.0_1725904722578.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("autotrain_imdb_1166543179_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("autotrain_imdb_1166543179_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|autotrain_imdb_1166543179_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/ameerazam08/autotrain-imdb-1166543179 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-autotrain_ve993_lub6e_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-autotrain_ve993_lub6e_pipeline_en.md new file mode 100644 index 00000000000000..142d17a0b1ad03 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-autotrain_ve993_lub6e_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English autotrain_ve993_lub6e_pipeline pipeline MarianTransformer from LRJ1981 +author: John Snow Labs +name: autotrain_ve993_lub6e_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`autotrain_ve993_lub6e_pipeline` is a English model originally trained by LRJ1981. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/autotrain_ve993_lub6e_pipeline_en_5.5.0_3.0_1725914039377.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/autotrain_ve993_lub6e_pipeline_en_5.5.0_3.0_1725914039377.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("autotrain_ve993_lub6e_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("autotrain_ve993_lub6e_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|autotrain_ve993_lub6e_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|505.2 MB| + +## References + +https://huggingface.co/LRJ1981/autotrain-ve993-lub6e + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-autotrain_xlm_roberta_base_reviews_672119799_en.md b/docs/_posts/ahmedlone127/2024-09-09-autotrain_xlm_roberta_base_reviews_672119799_en.md new file mode 100644 index 00000000000000..937ea184dc0a7f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-autotrain_xlm_roberta_base_reviews_672119799_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English autotrain_xlm_roberta_base_reviews_672119799 XlmRoBertaForSequenceClassification from YXHugging +author: John Snow Labs +name: autotrain_xlm_roberta_base_reviews_672119799 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`autotrain_xlm_roberta_base_reviews_672119799` is a English model originally trained by YXHugging. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/autotrain_xlm_roberta_base_reviews_672119799_en_5.5.0_3.0_1725907859800.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/autotrain_xlm_roberta_base_reviews_672119799_en_5.5.0_3.0_1725907859800.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("autotrain_xlm_roberta_base_reviews_672119799","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("autotrain_xlm_roberta_base_reviews_672119799", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|autotrain_xlm_roberta_base_reviews_672119799| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|858.1 MB| + +## References + +https://huggingface.co/YXHugging/autotrain-xlm-roberta-base-reviews-672119799 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-autotrain_xlm_roberta_base_reviews_672119799_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-autotrain_xlm_roberta_base_reviews_672119799_pipeline_en.md new file mode 100644 index 00000000000000..07ff4d3c9b9b22 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-autotrain_xlm_roberta_base_reviews_672119799_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English autotrain_xlm_roberta_base_reviews_672119799_pipeline pipeline XlmRoBertaForSequenceClassification from YXHugging +author: John Snow Labs +name: autotrain_xlm_roberta_base_reviews_672119799_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`autotrain_xlm_roberta_base_reviews_672119799_pipeline` is a English model originally trained by YXHugging. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/autotrain_xlm_roberta_base_reviews_672119799_pipeline_en_5.5.0_3.0_1725907967603.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/autotrain_xlm_roberta_base_reviews_672119799_pipeline_en_5.5.0_3.0_1725907967603.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("autotrain_xlm_roberta_base_reviews_672119799_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("autotrain_xlm_roberta_base_reviews_672119799_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|autotrain_xlm_roberta_base_reviews_672119799_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|858.1 MB| + +## References + +https://huggingface.co/YXHugging/autotrain-xlm-roberta-base-reviews-672119799 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-babyberta_aochildes_2_5m_aochildes_french_without_masking_seed3_finetuned_squad_en.md b/docs/_posts/ahmedlone127/2024-09-09-babyberta_aochildes_2_5m_aochildes_french_without_masking_seed3_finetuned_squad_en.md new file mode 100644 index 00000000000000..b4011f71ad0743 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-babyberta_aochildes_2_5m_aochildes_french_without_masking_seed3_finetuned_squad_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English babyberta_aochildes_2_5m_aochildes_french_without_masking_seed3_finetuned_squad RoBertaForQuestionAnswering from lielbin +author: John Snow Labs +name: babyberta_aochildes_2_5m_aochildes_french_without_masking_seed3_finetuned_squad +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`babyberta_aochildes_2_5m_aochildes_french_without_masking_seed3_finetuned_squad` is a English model originally trained by lielbin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/babyberta_aochildes_2_5m_aochildes_french_without_masking_seed3_finetuned_squad_en_5.5.0_3.0_1725866585925.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/babyberta_aochildes_2_5m_aochildes_french_without_masking_seed3_finetuned_squad_en_5.5.0_3.0_1725866585925.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+     .setInputCols(["question", "context"]) \
+     .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = RoBertaForQuestionAnswering.pretrained("babyberta_aochildes_2_5m_aochildes_french_without_masking_seed3_finetuned_squad","en") \
+     .setInputCols(["document_question","document_context"]) \
+     .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+     .setInputCols(Array("question", "context"))
+     .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = RoBertaForQuestionAnswering.pretrained("babyberta_aochildes_2_5m_aochildes_french_without_masking_seed3_finetuned_squad", "en")
+     .setInputCols(Array("document_question","document_context"))
+     .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?","I use spark-nlp.")).toDS.toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
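+
+With the pipeline applied as above, the predicted answer span is available in the `answer` column defined in the snippet; for example:
+
+```python
+# "answer.result" holds the extracted answer text for each question/context pair
+pipelineDF.select("answer.result").show(truncate=False)
+```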
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|babyberta_aochildes_2_5m_aochildes_french_without_masking_seed3_finetuned_squad| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|32.0 MB| + +## References + +https://huggingface.co/lielbin/BabyBERTa-aochildes_2.5M_aochildes-french-without-Masking-seed3-finetuned-SQuAD \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad_en.md b/docs/_posts/ahmedlone127/2024-09-09-babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad_en.md new file mode 100644 index 00000000000000..d7dcdc18fe1d0b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad RoBertaForQuestionAnswering from lielbin +author: John Snow Labs +name: babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad` is a English model originally trained by lielbin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad_en_5.5.0_3.0_1725875814393.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad_en_5.5.0_3.0_1725875814393.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+     .setInputCols(["question", "context"]) \
+     .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = RoBertaForQuestionAnswering.pretrained("babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad","en") \
+     .setInputCols(["document_question","document_context"]) \
+     .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+     .setInputCols(Array("question", "context"))
+     .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = RoBertaForQuestionAnswering.pretrained("babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad", "en")
+     .setInputCols(Array("document_question","document_context"))
+     .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?","I use spark-nlp.")).toDS.toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|32.0 MB| + +## References + +https://huggingface.co/lielbin/BabyBERTa-aochildes-french_aochildes_2.5M-without-Masking-seed6-finetuned-SQuAD \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad_pipeline_en.md new file mode 100644 index 00000000000000..121bbfc224270b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad_pipeline pipeline RoBertaForQuestionAnswering from lielbin +author: John Snow Labs +name: babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad_pipeline` is a English model originally trained by lielbin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad_pipeline_en_5.5.0_3.0_1725875816429.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad_pipeline_en_5.5.0_3.0_1725875816429.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|babyberta_aochildes_french_aochildes_2_5m_without_masking_seed6_finetuned_squad_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|32.0 MB| + +## References + +https://huggingface.co/lielbin/BabyBERTa-aochildes-french_aochildes_2.5M-without-Masking-seed6-finetuned-SQuAD + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-babyberta_childes_2_5_0_1_finetuned_qasrl_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-babyberta_childes_2_5_0_1_finetuned_qasrl_pipeline_en.md new file mode 100644 index 00000000000000..f279e4c6c5597a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-babyberta_childes_2_5_0_1_finetuned_qasrl_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English babyberta_childes_2_5_0_1_finetuned_qasrl_pipeline pipeline RoBertaForQuestionAnswering from lielbin +author: John Snow Labs +name: babyberta_childes_2_5_0_1_finetuned_qasrl_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`babyberta_childes_2_5_0_1_finetuned_qasrl_pipeline` is a English model originally trained by lielbin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/babyberta_childes_2_5_0_1_finetuned_qasrl_pipeline_en_5.5.0_3.0_1725876114604.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/babyberta_childes_2_5_0_1_finetuned_qasrl_pipeline_en_5.5.0_3.0_1725876114604.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("babyberta_childes_2_5_0_1_finetuned_qasrl_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("babyberta_childes_2_5_0_1_finetuned_qasrl_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|babyberta_childes_2_5_0_1_finetuned_qasrl_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|32.0 MB| + +## References + +https://huggingface.co/lielbin/babyberta-CHILDES_2.5-0.1-finetuned-QASRL + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-babyberta_wikipedia1_1_25m_wikipedia_french1_25m_without_masking_seed3_finetuned_squad_en.md b/docs/_posts/ahmedlone127/2024-09-09-babyberta_wikipedia1_1_25m_wikipedia_french1_25m_without_masking_seed3_finetuned_squad_en.md new file mode 100644 index 00000000000000..cf01def33e7d5a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-babyberta_wikipedia1_1_25m_wikipedia_french1_25m_without_masking_seed3_finetuned_squad_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English babyberta_wikipedia1_1_25m_wikipedia_french1_25m_without_masking_seed3_finetuned_squad RoBertaForQuestionAnswering from lielbin +author: John Snow Labs +name: babyberta_wikipedia1_1_25m_wikipedia_french1_25m_without_masking_seed3_finetuned_squad +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`babyberta_wikipedia1_1_25m_wikipedia_french1_25m_without_masking_seed3_finetuned_squad` is a English model originally trained by lielbin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/babyberta_wikipedia1_1_25m_wikipedia_french1_25m_without_masking_seed3_finetuned_squad_en_5.5.0_3.0_1725876185536.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/babyberta_wikipedia1_1_25m_wikipedia_french1_25m_without_masking_seed3_finetuned_squad_en_5.5.0_3.0_1725876185536.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = RoBertaForQuestionAnswering.pretrained("babyberta_wikipedia1_1_25m_wikipedia_french1_25m_without_masking_seed3_finetuned_squad","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = RoBertaForQuestionAnswering.pretrained("babyberta_wikipedia1_1_25m_wikipedia_french1_25m_without_masking_seed3_finetuned_squad", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|babyberta_wikipedia1_1_25m_wikipedia_french1_25m_without_masking_seed3_finetuned_squad| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|31.9 MB| + +## References + +https://huggingface.co/lielbin/BabyBERTa-wikipedia1_1.25M_wikipedia_french1.25M-without-Masking-seed3-finetuned-SQuAD \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-babyberta_wikipedia_french_aochildes_french_without_masking_seed6_finetuned_squad_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-babyberta_wikipedia_french_aochildes_french_without_masking_seed6_finetuned_squad_pipeline_en.md new file mode 100644 index 00000000000000..e68f1ae21805bf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-babyberta_wikipedia_french_aochildes_french_without_masking_seed6_finetuned_squad_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English babyberta_wikipedia_french_aochildes_french_without_masking_seed6_finetuned_squad_pipeline pipeline RoBertaForQuestionAnswering from lielbin +author: John Snow Labs +name: babyberta_wikipedia_french_aochildes_french_without_masking_seed6_finetuned_squad_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`babyberta_wikipedia_french_aochildes_french_without_masking_seed6_finetuned_squad_pipeline` is a English model originally trained by lielbin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/babyberta_wikipedia_french_aochildes_french_without_masking_seed6_finetuned_squad_pipeline_en_5.5.0_3.0_1725876468215.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/babyberta_wikipedia_french_aochildes_french_without_masking_seed6_finetuned_squad_pipeline_en_5.5.0_3.0_1725876468215.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("babyberta_wikipedia_french_aochildes_french_without_masking_seed6_finetuned_squad_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("babyberta_wikipedia_french_aochildes_french_without_masking_seed6_finetuned_squad_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
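
The `df` used above is not defined on this page. A minimal sketch, assuming the pipeline's MultiDocumentAssembler expects `question` and `context` columns as in the standalone model examples (an assumption, not confirmed here):

```python
# Hypothetical input DataFrame for the question-answering pipeline.
df = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
annotations = pipeline.transform(df)
```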
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|babyberta_wikipedia_french_aochildes_french_without_masking_seed6_finetuned_squad_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|32.0 MB| + +## References + +https://huggingface.co/lielbin/BabyBERTa-wikipedia_french_aochildes-french-without-Masking-seed6-finetuned-SQuAD + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-babyberta_wikipedia_french_run3_with_masking_finetuned_french_squad_en.md b/docs/_posts/ahmedlone127/2024-09-09-babyberta_wikipedia_french_run3_with_masking_finetuned_french_squad_en.md new file mode 100644 index 00000000000000..70a7448da8f572 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-babyberta_wikipedia_french_run3_with_masking_finetuned_french_squad_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English babyberta_wikipedia_french_run3_with_masking_finetuned_french_squad RoBertaForQuestionAnswering from lielbin +author: John Snow Labs +name: babyberta_wikipedia_french_run3_with_masking_finetuned_french_squad +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`babyberta_wikipedia_french_run3_with_masking_finetuned_french_squad` is a English model originally trained by lielbin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/babyberta_wikipedia_french_run3_with_masking_finetuned_french_squad_en_5.5.0_3.0_1725876177707.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/babyberta_wikipedia_french_run3_with_masking_finetuned_french_squad_en_5.5.0_3.0_1725876177707.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = RoBertaForQuestionAnswering.pretrained("babyberta_wikipedia_french_run3_with_masking_finetuned_french_squad","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = RoBertaForQuestionAnswering.pretrained("babyberta_wikipedia_french_run3_with_masking_finetuned_french_squad", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|babyberta_wikipedia_french_run3_with_masking_finetuned_french_squad| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|32.0 MB| + +## References + +https://huggingface.co/lielbin/BabyBERTa-wikipedia_french-run3-with-Masking-finetuned-french-SQuAD \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-banking77_en.md b/docs/_posts/ahmedlone127/2024-09-09-banking77_en.md new file mode 100644 index 00000000000000..9e8017126da32c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-banking77_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English banking77 DistilBertEmbeddings from IreNkweke +author: John Snow Labs +name: banking77 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`banking77` is a English model originally trained by IreNkweke. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/banking77_en_5.5.0_3.0_1725909224044.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/banking77_en_5.5.0_3.0_1725909224044.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("banking77","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("banking77","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|banking77| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/IreNkweke/banking77 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-banking77_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-banking77_pipeline_en.md new file mode 100644 index 00000000000000..b0b06322c3cc79 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-banking77_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English banking77_pipeline pipeline DistilBertEmbeddings from IreNkweke +author: John Snow Labs +name: banking77_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`banking77_pipeline` is a English model originally trained by IreNkweke. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/banking77_pipeline_en_5.5.0_3.0_1725909235902.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/banking77_pipeline_en_5.5.0_3.0_1725909235902.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("banking77_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("banking77_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
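
The `df` referenced above is not defined on this page. A minimal sketch, assuming the pipeline reads a single `text` column as in the corresponding standalone embeddings example:

```python
# Hypothetical input DataFrame; the "text" column name is an assumption.
df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
annotations = pipeline.transform(df)
```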
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|banking77_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/IreNkweke/banking77 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bent_pubmedbert_ner_cell_type_en.md b/docs/_posts/ahmedlone127/2024-09-09-bent_pubmedbert_ner_cell_type_en.md new file mode 100644 index 00000000000000..0618b0241ccf81 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bent_pubmedbert_ner_cell_type_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English bent_pubmedbert_ner_cell_type BertForTokenClassification from pruas +author: John Snow Labs +name: bent_pubmedbert_ner_cell_type +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, bert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bent_pubmedbert_ner_cell_type` is a English model originally trained by pruas. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bent_pubmedbert_ner_cell_type_en_5.5.0_3.0_1725886948121.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bent_pubmedbert_ner_cell_type_en_5.5.0_3.0_1725886948121.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = BertForTokenClassification.pretrained("bent_pubmedbert_ner_cell_type","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = BertForTokenClassification.pretrained("bent_pubmedbert_ner_cell_type", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bent_pubmedbert_ner_cell_type| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|408.1 MB| + +## References + +https://huggingface.co/pruas/BENT-PubMedBERT-NER-Cell-Type \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bert_agnews_en.md b/docs/_posts/ahmedlone127/2024-09-09-bert_agnews_en.md new file mode 100644 index 00000000000000..bcd08a0ce70b39 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bert_agnews_en.md @@ -0,0 +1,98 @@ +--- +layout: model +title: English bert_agnews BertForSequenceClassification from tzhao3 +author: John Snow Labs +name: bert_agnews +date: 2024-09-09 +tags: [bert, en, open_source, sequence_classification, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_agnews` is a English model originally trained by tzhao3. + +## Predicted Entities + + + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_agnews_en_5.5.0_3.0_1725900219589.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_agnews_en_5.5.0_3.0_1725900219589.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
document_assembler = DocumentAssembler()\
    .setInputCol("text")\
    .setOutputCol("document")

tokenizer = Tokenizer()\
    .setInputCols("document")\
    .setOutputCol("token")

sequenceClassifier = BertForSequenceClassification.pretrained("bert_agnews","en")\
    .setInputCols(["document","token"])\
    .setOutputCol("class")

pipeline = Pipeline().setStages([document_assembler, tokenizer, sequenceClassifier])

data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")

result = pipeline.fit(data).transform(data)
```
```scala
val document_assembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val sequenceClassifier = BertForSequenceClassification.pretrained("bert_agnews","en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(document_assembler, tokenizer, sequenceClassifier))

val data = Seq("PUT YOUR STRING HERE").toDS.toDF("text")

val result = pipeline.fit(data).transform(data)
```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_agnews| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.4 MB| + +## References + +References + +https://huggingface.co/tzhao3/Bert-AGnews \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bert_base_cased_pubmedqamodel_en.md b/docs/_posts/ahmedlone127/2024-09-09-bert_base_cased_pubmedqamodel_en.md new file mode 100644 index 00000000000000..35dbe9aef7a653 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bert_base_cased_pubmedqamodel_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English bert_base_cased_pubmedqamodel BertForQuestionAnswering from pythonist +author: John Snow Labs +name: bert_base_cased_pubmedqamodel +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, bert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_base_cased_pubmedqamodel` is a English model originally trained by pythonist. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_cased_pubmedqamodel_en_5.5.0_3.0_1725886265782.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_cased_pubmedqamodel_en_5.5.0_3.0_1725886265782.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = BertForQuestionAnswering.pretrained("bert_base_cased_pubmedqamodel","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = BertForQuestionAnswering.pretrained("bert_base_cased_pubmedqamodel", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_cased_pubmedqamodel| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|403.7 MB| + +## References + +https://huggingface.co/pythonist/bert-base-cased-PubmedQAmodel \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bert_base_uncased_newscategoryclassification_fullmodel_5_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-bert_base_uncased_newscategoryclassification_fullmodel_5_pipeline_en.md new file mode 100644 index 00000000000000..491fee3fa94772 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bert_base_uncased_newscategoryclassification_fullmodel_5_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bert_base_uncased_newscategoryclassification_fullmodel_5_pipeline pipeline DistilBertForSequenceClassification from akashmaggon +author: John Snow Labs +name: bert_base_uncased_newscategoryclassification_fullmodel_5_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_base_uncased_newscategoryclassification_fullmodel_5_pipeline` is a English model originally trained by akashmaggon. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_uncased_newscategoryclassification_fullmodel_5_pipeline_en_5.5.0_3.0_1725872887979.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_uncased_newscategoryclassification_fullmodel_5_pipeline_en_5.5.0_3.0_1725872887979.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_base_uncased_newscategoryclassification_fullmodel_5_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_base_uncased_newscategoryclassification_fullmodel_5_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
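
The `df` used above is not defined on this page. A minimal sketch, assuming the classification pipeline expects a single `text` column as in the standalone model examples:

```python
# Hypothetical input DataFrame; the "text" column name is an assumption.
df = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")
annotations = pipeline.transform(df)
```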
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_uncased_newscategoryclassification_fullmodel_5_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.6 MB| + +## References + +https://huggingface.co/akashmaggon/bert-base-uncased-newscategoryclassification-fullmodel-5 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bert_cased_profanity_en.md b/docs/_posts/ahmedlone127/2024-09-09-bert_cased_profanity_en.md new file mode 100644 index 00000000000000..b5c900c79a8448 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bert_cased_profanity_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English bert_cased_profanity BertForSequenceClassification from SenswiseData +author: John Snow Labs +name: bert_cased_profanity +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_cased_profanity` is a English model originally trained by SenswiseData. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_cased_profanity_en_5.5.0_3.0_1725853094996.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_cased_profanity_en_5.5.0_3.0_1725853094996.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = BertForSequenceClassification.pretrained("bert_cased_profanity","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = BertForSequenceClassification.pretrained("bert_cased_profanity", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_cased_profanity| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|414.5 MB| + +## References + +https://huggingface.co/SenswiseData/bert_cased_profanity \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_english_news_classification_headlines_pipeline_xx.md b/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_english_news_classification_headlines_pipeline_xx.md new file mode 100644 index 00000000000000..1914532d8c4107 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_english_news_classification_headlines_pipeline_xx.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Multilingual bert_classifier_english_news_classification_headlines_pipeline pipeline BertForSequenceClassification from M47Labs +author: John Snow Labs +name: bert_classifier_english_news_classification_headlines_pipeline +date: 2024-09-09 +tags: [xx, open_source, pipeline, onnx] +task: Text Classification +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_classifier_english_news_classification_headlines_pipeline` is a Multilingual model originally trained by M47Labs. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_classifier_english_news_classification_headlines_pipeline_xx_5.5.0_3.0_1725900285573.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_classifier_english_news_classification_headlines_pipeline_xx_5.5.0_3.0_1725900285573.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_classifier_english_news_classification_headlines_pipeline", lang = "xx") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_classifier_english_news_classification_headlines_pipeline", lang = "xx") +val annotations = pipeline.transform(df) + +``` +
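
As in the other pipeline pages, `df` is assumed to already exist. A minimal sketch, assuming a single `text` input column:

```python
# Hypothetical input DataFrame; the "text" column name is an assumption.
df = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")
annotations = pipeline.transform(df)
```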
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_classifier_english_news_classification_headlines_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|xx| +|Size:|409.5 MB| + +## References + +https://huggingface.co/M47Labs/english_news_classification_headlines + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_english_news_classification_headlines_xx.md b/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_english_news_classification_headlines_xx.md new file mode 100644 index 00000000000000..7c229dd63e6235 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_english_news_classification_headlines_xx.md @@ -0,0 +1,104 @@ +--- +layout: model +title: Multilingual BertForSequenceClassification Cased model (from M47Labs) +author: John Snow Labs +name: bert_classifier_english_news_classification_headlines +date: 2024-09-09 +tags: [distilbert, sequence_classification, open_source, it, en, xx, onnx] +task: Text Classification +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `english_news_classification_headlines` is a Multilingual model originally trained by `M47Labs`. + +## Predicted Entities + +`religion and belief`, `science and technology`, `health`, `society`, `weather`, `enviroment`, `sport`, `politics`, `education`, `lifestyle and leisure`, `labour` + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_classifier_english_news_classification_headlines_xx_5.5.0_3.0_1725900265885.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_classifier_english_news_classification_headlines_xx_5.5.0_3.0_1725900265885.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +sequenceClassifier_loaded = BertForSequenceClassification.pretrained("bert_classifier_english_news_classification_headlines","xx") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("class") + +pipeline = Pipeline(stages=[documentAssembler, tokenizer,sequenceClassifier_loaded]) + +data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text") + +result = pipeline.fit(data).transform(data) +``` +```scala +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier_loaded = BertForSequenceClassification.pretrained("bert_classifier_english_news_classification_headlines","xx") + .setInputCols(Array("document", "token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer,sequenceClassifier_loaded)) + +val data = Seq("PUT YOUR STRING HERE").toDF("text") + +val result = pipeline.fit(data).transform(data) +``` + +{:.nlu-block} +```python +import nlu +nlu.load("xx.classify.bert.news.").predict("""PUT YOUR STRING HERE""") +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_classifier_english_news_classification_headlines| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|xx| +|Size:|409.4 MB| + +## References + +References + +- https://huggingface.co/M47Labs/english_news_classification_headlines \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_norwegian_bokml_base_user_needs_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_norwegian_bokml_base_user_needs_pipeline_en.md new file mode 100644 index 00000000000000..5a02040d4842c0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_norwegian_bokml_base_user_needs_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bert_classifier_norwegian_bokml_base_user_needs_pipeline pipeline BertForSequenceClassification from thusken +author: John Snow Labs +name: bert_classifier_norwegian_bokml_base_user_needs_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_classifier_norwegian_bokml_base_user_needs_pipeline` is a English model originally trained by thusken. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_classifier_norwegian_bokml_base_user_needs_pipeline_en_5.5.0_3.0_1725900447364.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_classifier_norwegian_bokml_base_user_needs_pipeline_en_5.5.0_3.0_1725900447364.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_classifier_norwegian_bokml_base_user_needs_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_classifier_norwegian_bokml_base_user_needs_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
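
The `df` above is not defined on this page. A minimal sketch, assuming the pipeline reads a single `text` column as in the standalone model examples:

```python
# Hypothetical input DataFrame; the "text" column name is an assumption.
df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
annotations = pipeline.transform(df)
```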
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_classifier_norwegian_bokml_base_user_needs_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|668.5 MB| + +## References + +https://huggingface.co/thusken/nb-bert-base-user-needs + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_norwegian_bokml_large_user_needs_nb.md b/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_norwegian_bokml_large_user_needs_nb.md new file mode 100644 index 00000000000000..d3412fb9cd628e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_norwegian_bokml_large_user_needs_nb.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Norwegian Bokmål bert_classifier_norwegian_bokml_large_user_needs BertForSequenceClassification from thusken +author: John Snow Labs +name: bert_classifier_norwegian_bokml_large_user_needs +date: 2024-09-09 +tags: [nb, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: nb +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_classifier_norwegian_bokml_large_user_needs` is a Norwegian Bokmål model originally trained by thusken. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_classifier_norwegian_bokml_large_user_needs_nb_5.5.0_3.0_1725900296193.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_classifier_norwegian_bokml_large_user_needs_nb_5.5.0_3.0_1725900296193.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = BertForSequenceClassification.pretrained("bert_classifier_norwegian_bokml_large_user_needs","nb") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = BertForSequenceClassification.pretrained("bert_classifier_norwegian_bokml_large_user_needs", "nb")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_classifier_norwegian_bokml_large_user_needs| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|nb| +|Size:|1.3 GB| + +## References + +https://huggingface.co/thusken/nb-bert-large-user-needs \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_sb_en.md b/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_sb_en.md new file mode 100644 index 00000000000000..b1af6002b63f4c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_sb_en.md @@ -0,0 +1,104 @@ +--- +layout: model +title: English BertForSequenceClassification Cased model (from EColi) +author: John Snow Labs +name: bert_classifier_sb +date: 2024-09-09 +tags: [bert, sequence_classification, classification, open_source, en, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `SB_Classifier` is a English model originally trained by `EColi`. + +## Predicted Entities + +`SELFPROMO`, `SPONSOR`, `NONE`, `INTERACTION` + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_classifier_sb_en_5.5.0_3.0_1725900942582.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_classifier_sb_en_5.5.0_3.0_1725900942582.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +sequenceClassifier_loaded = BertForSequenceClassification.pretrained("bert_classifier_sb","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("class") + +pipeline = Pipeline(stages=[documentAssembler, tokenizer,sequenceClassifier_loaded]) + +data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text") + +result = pipeline.fit(data).transform(data) +``` +```scala +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier_loaded = BertForSequenceClassification.pretrained("bert_classifier_sb","en") + .setInputCols(Array("document", "token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer,sequenceClassifier_loaded)) + +val data = Seq("PUT YOUR STRING HERE").toDF("text") + +val result = pipeline.fit(data).transform(data) +``` + +{:.nlu-block} +```python +import nlu +nlu.load("en.classify.bert.by_ecoli").predict("""PUT YOUR STRING HERE""") +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_classifier_sb| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.4 MB| + +## References + +References + +- https://huggingface.co/EColi/SB_Classifier \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_sead_l_6_h_384_a_12_qqp_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_sead_l_6_h_384_a_12_qqp_pipeline_en.md new file mode 100644 index 00000000000000..6c5c68b6db6753 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bert_classifier_sead_l_6_h_384_a_12_qqp_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bert_classifier_sead_l_6_h_384_a_12_qqp_pipeline pipeline BertForSequenceClassification from course5i +author: John Snow Labs +name: bert_classifier_sead_l_6_h_384_a_12_qqp_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_classifier_sead_l_6_h_384_a_12_qqp_pipeline` is a English model originally trained by course5i. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_classifier_sead_l_6_h_384_a_12_qqp_pipeline_en_5.5.0_3.0_1725899916384.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_classifier_sead_l_6_h_384_a_12_qqp_pipeline_en_5.5.0_3.0_1725899916384.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_classifier_sead_l_6_h_384_a_12_qqp_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_classifier_sead_l_6_h_384_a_12_qqp_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
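
The `df` used above is assumed to already exist. A minimal sketch, assuming the pipeline expects a single `text` column as in the standalone model examples:

```python
# Hypothetical input DataFrame; the "text" column name is an assumption.
df = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")
annotations = pipeline.transform(df)
```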
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_classifier_sead_l_6_h_384_a_12_qqp_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|84.4 MB| + +## References + +https://huggingface.co/course5i/SEAD-L-6_H-384_A-12-qqp + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bert_finetuned_phishing_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-bert_finetuned_phishing_pipeline_en.md new file mode 100644 index 00000000000000..3d86b7e09c5207 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bert_finetuned_phishing_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bert_finetuned_phishing_pipeline pipeline BertForSequenceClassification from ealvaradob +author: John Snow Labs +name: bert_finetuned_phishing_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_finetuned_phishing_pipeline` is a English model originally trained by ealvaradob. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_finetuned_phishing_pipeline_en_5.5.0_3.0_1725900552120.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_finetuned_phishing_pipeline_en_5.5.0_3.0_1725900552120.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_finetuned_phishing_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_finetuned_phishing_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
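
The `df` above is not defined on this page. A minimal sketch, assuming a single `text` input column as in the standalone model examples:

```python
# Hypothetical input DataFrame; the "text" column name is an assumption.
df = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")
annotations = pipeline.transform(df)
```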
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_finetuned_phishing_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/ealvaradob/bert-finetuned-phishing + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bert_leg_al_perplexity_en.md b/docs/_posts/ahmedlone127/2024-09-09-bert_leg_al_perplexity_en.md new file mode 100644 index 00000000000000..6429ab0de8baee --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bert_leg_al_perplexity_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English bert_leg_al_perplexity RoBertaEmbeddings from desarrolloasesoreslocales +author: John Snow Labs +name: bert_leg_al_perplexity +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_leg_al_perplexity` is a English model originally trained by desarrolloasesoreslocales. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_leg_al_perplexity_en_5.5.0_3.0_1725910403352.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_leg_al_perplexity_en_5.5.0_3.0_1725910403352.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("bert_leg_al_perplexity","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("bert_leg_al_perplexity","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_leg_al_perplexity| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/desarrolloasesoreslocales/bert-leg-al-perplexity \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bert_leg_al_perplexity_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-bert_leg_al_perplexity_pipeline_en.md new file mode 100644 index 00000000000000..9995fc7506c9ed --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bert_leg_al_perplexity_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bert_leg_al_perplexity_pipeline pipeline RoBertaEmbeddings from desarrolloasesoreslocales +author: John Snow Labs +name: bert_leg_al_perplexity_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_leg_al_perplexity_pipeline` is a English model originally trained by desarrolloasesoreslocales. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_leg_al_perplexity_pipeline_en_5.5.0_3.0_1725910427146.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_leg_al_perplexity_pipeline_en_5.5.0_3.0_1725910427146.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_leg_al_perplexity_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_leg_al_perplexity_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
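
As with the other pipeline pages, `df` is assumed to already exist. A minimal sketch, assuming the embeddings pipeline reads a single `text` column:

```python
# Hypothetical input DataFrame; the "text" column name is an assumption.
df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
annotations = pipeline.transform(df)
```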
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_leg_al_perplexity_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/desarrolloasesoreslocales/bert-leg-al-perplexity + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bert_moral_emotion_kor_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-bert_moral_emotion_kor_pipeline_en.md new file mode 100644 index 00000000000000..abcd3a5a2f8535 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bert_moral_emotion_kor_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bert_moral_emotion_kor_pipeline pipeline BertForSequenceClassification from Chaeyoon +author: John Snow Labs +name: bert_moral_emotion_kor_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_moral_emotion_kor_pipeline` is a English model originally trained by Chaeyoon. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_moral_emotion_kor_pipeline_en_5.5.0_3.0_1725856570073.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_moral_emotion_kor_pipeline_en_5.5.0_3.0_1725856570073.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_moral_emotion_kor_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_moral_emotion_kor_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
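
The `df` referenced above is not defined on this page. A minimal sketch, assuming the classification pipeline expects a single `text` column:

```python
# Hypothetical input DataFrame; the "text" column name is an assumption.
df = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")
annotations = pipeline.transform(df)
```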
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_moral_emotion_kor_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|408.5 MB| + +## References + +https://huggingface.co/Chaeyoon/BERT-Moral-Emotion-KOR + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bert_southern_sotho_qa_multi_qa_mpnet_base_cos_v1_epochs_10_en.md b/docs/_posts/ahmedlone127/2024-09-09-bert_southern_sotho_qa_multi_qa_mpnet_base_cos_v1_epochs_10_en.md new file mode 100644 index 00000000000000..d8bf518a87b380 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bert_southern_sotho_qa_multi_qa_mpnet_base_cos_v1_epochs_10_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English bert_southern_sotho_qa_multi_qa_mpnet_base_cos_v1_epochs_10 MPNetEmbeddings from abhijitt +author: John Snow Labs +name: bert_southern_sotho_qa_multi_qa_mpnet_base_cos_v1_epochs_10 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_southern_sotho_qa_multi_qa_mpnet_base_cos_v1_epochs_10` is a English model originally trained by abhijitt. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_southern_sotho_qa_multi_qa_mpnet_base_cos_v1_epochs_10_en_5.5.0_3.0_1725874241694.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_southern_sotho_qa_multi_qa_mpnet_base_cos_v1_epochs_10_en_5.5.0_3.0_1725874241694.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("bert_southern_sotho_qa_multi_qa_mpnet_base_cos_v1_epochs_10","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("bert_southern_sotho_qa_multi_qa_mpnet_base_cos_v1_epochs_10","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
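To pull plain sentence vectors out of Spark NLP's annotation structs, an `EmbeddingsFinisher` can be appended to the pipeline above. A hedged sketch (the imports and the finisher stage are additions to the original example):

```python
import sparknlp
from sparknlp.base import DocumentAssembler, EmbeddingsFinisher
from sparknlp.annotator import MPNetEmbeddings
from pyspark.ml import Pipeline

spark = sparknlp.start()

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

embeddings = MPNetEmbeddings.pretrained("bert_southern_sotho_qa_multi_qa_mpnet_base_cos_v1_epochs_10", "en") \
    .setInputCols(["document"]) \
    .setOutputCol("embeddings")

# Converts annotation structs into plain float vectors for downstream ML stages
finisher = EmbeddingsFinisher() \
    .setInputCols(["embeddings"]) \
    .setOutputCols(["embedding_vectors"]) \
    .setOutputAsVector(True)

pipeline = Pipeline().setStages([documentAssembler, embeddings, finisher])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipeline.fit(data).transform(data).select("embedding_vectors").show(truncate=100)
```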
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_southern_sotho_qa_multi_qa_mpnet_base_cos_v1_epochs_10| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.0 MB| + +## References + +https://huggingface.co/abhijitt/bert_st_qa_multi-qa-mpnet-base-cos-v1-epochs-10 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bert_turkish_sentiment_analysis_cased_tr.md b/docs/_posts/ahmedlone127/2024-09-09-bert_turkish_sentiment_analysis_cased_tr.md new file mode 100644 index 00000000000000..32690511c35515 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bert_turkish_sentiment_analysis_cased_tr.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Turkish bert_turkish_sentiment_analysis_cased BertForSequenceClassification from Gorengoz +author: John Snow Labs +name: bert_turkish_sentiment_analysis_cased +date: 2024-09-09 +tags: [tr, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: tr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_turkish_sentiment_analysis_cased` is a Turkish model originally trained by Gorengoz. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_turkish_sentiment_analysis_cased_tr_5.5.0_3.0_1725900853967.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_turkish_sentiment_analysis_cased_tr_5.5.0_3.0_1725900853967.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = BertForSequenceClassification.pretrained("bert_turkish_sentiment_analysis_cased","tr") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = BertForSequenceClassification.pretrained("bert_turkish_sentiment_analysis_cased", "tr") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
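For the stages to connect, the classifier has to read exactly the columns the earlier stages write (the assembler above outputs `document`, not `documents`). A runnable sketch with consistent column names and an illustrative Turkish input sentence:

```python
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, BertForSequenceClassification
from pyspark.ml import Pipeline

spark = sparknlp.start()

documentAssembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
tokenizer = Tokenizer().setInputCols(["document"]).setOutputCol("token")

sequenceClassifier = BertForSequenceClassification.pretrained("bert_turkish_sentiment_analysis_cased", "tr") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])

# Illustrative Turkish input ("This product is really great")
data = spark.createDataFrame([["Bu ürün gerçekten harika"]]).toDF("text")
pipeline.fit(data).transform(data).selectExpr("text", "`class`.result").show(truncate=False)
```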
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_turkish_sentiment_analysis_cased| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|tr| +|Size:|691.2 MB| + +## References + +https://huggingface.co/Gorengoz/bert-turkish-sentiment-analysis-cased \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bertimbau_finetuned_en.md b/docs/_posts/ahmedlone127/2024-09-09-bertimbau_finetuned_en.md new file mode 100644 index 00000000000000..0b5d6cd21d574f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bertimbau_finetuned_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English bertimbau_finetuned BertForSequenceClassification from Horusprg +author: John Snow Labs +name: bertimbau_finetuned +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bertimbau_finetuned` is a English model originally trained by Horusprg. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bertimbau_finetuned_en_5.5.0_3.0_1725900009217.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bertimbau_finetuned_en_5.5.0_3.0_1725900009217.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = BertForSequenceClassification.pretrained("bertimbau_finetuned","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = BertForSequenceClassification.pretrained("bertimbau_finetuned", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bertimbau_finetuned| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|408.2 MB| + +## References + +https://huggingface.co/Horusprg/bertimbau-finetuned \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bertin_base_gaussian_exp_512seqlen_es.md b/docs/_posts/ahmedlone127/2024-09-09-bertin_base_gaussian_exp_512seqlen_es.md new file mode 100644 index 00000000000000..f81c3ec668f739 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bertin_base_gaussian_exp_512seqlen_es.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Castilian, Spanish bertin_base_gaussian_exp_512seqlen RoBertaEmbeddings from bertin-project +author: John Snow Labs +name: bertin_base_gaussian_exp_512seqlen +date: 2024-09-09 +tags: [es, open_source, onnx, embeddings, roberta] +task: Embeddings +language: es +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bertin_base_gaussian_exp_512seqlen` is a Castilian, Spanish model originally trained by bertin-project. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bertin_base_gaussian_exp_512seqlen_es_5.5.0_3.0_1725910762343.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bertin_base_gaussian_exp_512seqlen_es_5.5.0_3.0_1725910762343.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("bertin_base_gaussian_exp_512seqlen","es") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("bertin_base_gaussian_exp_512seqlen","es") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
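Each token annotation produced by the embeddings stage carries both the token text (`result`) and its vector (`embeddings`). A short continuation of the example above for inspecting per-token vectors (the Spanish sample sentence is illustrative only):

```python
# Assumes `pipelineModel` from the example above
data = spark.createDataFrame([["El español es un idioma muy bonito"]]).toDF("text")
result = pipelineModel.transform(data)

# One row per token: the token text and its embedding vector
result.selectExpr("explode(embeddings) AS emb") \
      .selectExpr("emb.result AS token", "emb.embeddings AS vector") \
      .show(truncate=80)
```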
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bertin_base_gaussian_exp_512seqlen| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|es| +|Size:|232.0 MB| + +## References + +https://huggingface.co/bertin-project/bertin-base-gaussian-exp-512seqlen \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bertovosentneg2_en.md b/docs/_posts/ahmedlone127/2024-09-09-bertovosentneg2_en.md new file mode 100644 index 00000000000000..1d01448c3bcf31 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bertovosentneg2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English bertovosentneg2 RoBertaForSequenceClassification from Tanor +author: John Snow Labs +name: bertovosentneg2 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bertovosentneg2` is a English model originally trained by Tanor. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bertovosentneg2_en_5.5.0_3.0_1725920082687.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bertovosentneg2_en_5.5.0_3.0_1725920082687.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = RoBertaForSequenceClassification.pretrained("bertovosentneg2","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = RoBertaForSequenceClassification.pretrained("bertovosentneg2", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
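For single sentences or small batches, a `LightPipeline` avoids the DataFrame round trip. A hedged sketch built on the fitted pipeline from the example above (note the classifier must read the `document` column the assembler writes for the stages to connect):

```python
from sparknlp.base import LightPipeline

# Wrap the fitted PipelineModel from the example above for low-latency inference
light = LightPipeline(pipelineModel)

# annotate() returns plain Python dicts keyed by each stage's output column
print(light.annotate("I love spark-nlp"))
```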
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bertovosentneg2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|533.0 MB| + +## References + +https://huggingface.co/Tanor/BERTovoSENTNEG2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bge_base_financial_matryoshka_dpokhrel_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-bge_base_financial_matryoshka_dpokhrel_pipeline_en.md new file mode 100644 index 00000000000000..06fae92b9ee784 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bge_base_financial_matryoshka_dpokhrel_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English bge_base_financial_matryoshka_dpokhrel_pipeline pipeline BGEEmbeddings from dpokhrel +author: John Snow Labs +name: bge_base_financial_matryoshka_dpokhrel_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BGEEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bge_base_financial_matryoshka_dpokhrel_pipeline` is a English model originally trained by dpokhrel. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bge_base_financial_matryoshka_dpokhrel_pipeline_en_5.5.0_3.0_1725916762185.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bge_base_financial_matryoshka_dpokhrel_pipeline_en_5.5.0_3.0_1725916762185.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bge_base_financial_matryoshka_dpokhrel_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bge_base_financial_matryoshka_dpokhrel_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
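For quick checks without building a DataFrame, `PretrainedPipeline` also offers `fullAnnotate`. A minimal sketch (the sample question is illustrative; the exact output keys depend on the stages listed under "Included Models" below):

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("bge_base_financial_matryoshka_dpokhrel_pipeline", lang="en")

# fullAnnotate returns one result per input string, keyed by each stage's output column
results = pipeline.fullAnnotate("What drove revenue growth in the last quarter?")
print(results[0].keys())
```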
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bge_base_financial_matryoshka_dpokhrel_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|387.2 MB| + +## References + +https://huggingface.co/dpokhrel/bge-base-financial-matryoshka + +## Included Models + +- DocumentAssembler +- BGEEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bidirection_translate_model_fixed_v0_4_en.md b/docs/_posts/ahmedlone127/2024-09-09-bidirection_translate_model_fixed_v0_4_en.md new file mode 100644 index 00000000000000..3a1231831617ed --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bidirection_translate_model_fixed_v0_4_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English bidirection_translate_model_fixed_v0_4 MarianTransformer from gshields +author: John Snow Labs +name: bidirection_translate_model_fixed_v0_4 +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bidirection_translate_model_fixed_v0_4` is a English model originally trained by gshields. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bidirection_translate_model_fixed_v0_4_en_5.5.0_3.0_1725865049277.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bidirection_translate_model_fixed_v0_4_en_5.5.0_3.0_1725865049277.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("bidirection_translate_model_fixed_v0_4","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("bidirection_translate_model_fixed_v0_4","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
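Because the sentence detector splits the document first, the transformer emits one translation annotation per detected sentence. A short continuation for reading them back out of the transformed DataFrame (column names follow the pipeline above):

```python
# Assumes `pipelineDF` from the example above, where MarianTransformer writes to "translation"
pipelineDF.selectExpr("explode(translation) AS t") \
          .selectExpr("t.result AS translated_sentence") \
          .show(truncate=False)
```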
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bidirection_translate_model_fixed_v0_4| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|518.5 MB| + +## References + +https://huggingface.co/gshields/bidirection_translate_model_fixed_v0.4 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bio_gottbert_base_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-bio_gottbert_base_pipeline_en.md new file mode 100644 index 00000000000000..ac809ac24a9b80 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bio_gottbert_base_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bio_gottbert_base_pipeline pipeline RoBertaEmbeddings from SCAI-BIO +author: John Snow Labs +name: bio_gottbert_base_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bio_gottbert_base_pipeline` is a English model originally trained by SCAI-BIO. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bio_gottbert_base_pipeline_en_5.5.0_3.0_1725910313614.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bio_gottbert_base_pipeline_en_5.5.0_3.0_1725910313614.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bio_gottbert_base_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bio_gottbert_base_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bio_gottbert_base_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|470.5 MB| + +## References + +https://huggingface.co/SCAI-BIO/bio-gottbert-base + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-biomedroberta_finetuned_valid_testing_0_0002_32_en.md b/docs/_posts/ahmedlone127/2024-09-09-biomedroberta_finetuned_valid_testing_0_0002_32_en.md new file mode 100644 index 00000000000000..57c167b788bb99 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-biomedroberta_finetuned_valid_testing_0_0002_32_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English biomedroberta_finetuned_valid_testing_0_0002_32 RoBertaForTokenClassification from pabRomero +author: John Snow Labs +name: biomedroberta_finetuned_valid_testing_0_0002_32 +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`biomedroberta_finetuned_valid_testing_0_0002_32` is a English model originally trained by pabRomero. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/biomedroberta_finetuned_valid_testing_0_0002_32_en_5.5.0_3.0_1725915192483.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/biomedroberta_finetuned_valid_testing_0_0002_32_en_5.5.0_3.0_1725915192483.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = RoBertaForTokenClassification.pretrained("biomedroberta_finetuned_valid_testing_0_0002_32","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = RoBertaForTokenClassification.pretrained("biomedroberta_finetuned_valid_testing_0_0002_32", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
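Token-level IOB tags are usually easier to consume as entity chunks. A hedged continuation of the example above that appends Spark NLP's `NerConverter` (it assumes the token classifier reads and writes the `document`, `token` and `ner` columns shown above):

```python
from sparknlp.annotator import NerConverter
from pyspark.ml import Pipeline

# Groups consecutive B-/I- tags from the token classifier into entity chunks
nerConverter = NerConverter() \
    .setInputCols(["document", "token", "ner"]) \
    .setOutputCol("ner_chunk")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier, nerConverter])
model = pipeline.fit(data)

model.transform(data) \
    .selectExpr("explode(ner_chunk) AS chunk") \
    .selectExpr("chunk.result AS entity", "chunk.metadata['entity'] AS label") \
    .show(truncate=False)
```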
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|biomedroberta_finetuned_valid_testing_0_0002_32| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|466.3 MB| + +## References + +https://huggingface.co/pabRomero/BioMedRoBERTa-finetuned-valid-testing-0.0002-32 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-biomedroberta_finetuned_valid_testing_0_0002_32_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-biomedroberta_finetuned_valid_testing_0_0002_32_pipeline_en.md new file mode 100644 index 00000000000000..2d285e0b821188 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-biomedroberta_finetuned_valid_testing_0_0002_32_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English biomedroberta_finetuned_valid_testing_0_0002_32_pipeline pipeline RoBertaForTokenClassification from pabRomero +author: John Snow Labs +name: biomedroberta_finetuned_valid_testing_0_0002_32_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`biomedroberta_finetuned_valid_testing_0_0002_32_pipeline` is a English model originally trained by pabRomero. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/biomedroberta_finetuned_valid_testing_0_0002_32_pipeline_en_5.5.0_3.0_1725915216139.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/biomedroberta_finetuned_valid_testing_0_0002_32_pipeline_en_5.5.0_3.0_1725915216139.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("biomedroberta_finetuned_valid_testing_0_0002_32_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("biomedroberta_finetuned_valid_testing_0_0002_32_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|biomedroberta_finetuned_valid_testing_0_0002_32_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|466.3 MB| + +## References + +https://huggingface.co/pabRomero/BioMedRoBERTa-finetuned-valid-testing-0.0002-32 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-biomedroberta_full_finetuned_ner_pablo_en.md b/docs/_posts/ahmedlone127/2024-09-09-biomedroberta_full_finetuned_ner_pablo_en.md new file mode 100644 index 00000000000000..cc6db3f30b3ab7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-biomedroberta_full_finetuned_ner_pablo_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English biomedroberta_full_finetuned_ner_pablo RoBertaForTokenClassification from pabRomero +author: John Snow Labs +name: biomedroberta_full_finetuned_ner_pablo +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`biomedroberta_full_finetuned_ner_pablo` is a English model originally trained by pabRomero. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/biomedroberta_full_finetuned_ner_pablo_en_5.5.0_3.0_1725915458337.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/biomedroberta_full_finetuned_ner_pablo_en_5.5.0_3.0_1725915458337.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = RoBertaForTokenClassification.pretrained("biomedroberta_full_finetuned_ner_pablo","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = RoBertaForTokenClassification.pretrained("biomedroberta_full_finetuned_ner_pablo", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|biomedroberta_full_finetuned_ner_pablo| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|466.3 MB| + +## References + +https://huggingface.co/pabRomero/BioMedRoBERTa-full-finetuned-ner-pablo \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bkk_budget_ner_en.md b/docs/_posts/ahmedlone127/2024-09-09-bkk_budget_ner_en.md new file mode 100644 index 00000000000000..106bfeba5cde33 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bkk_budget_ner_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English bkk_budget_ner XlmRoBertaForTokenClassification from napatswift +author: John Snow Labs +name: bkk_budget_ner +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bkk_budget_ner` is a English model originally trained by napatswift. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bkk_budget_ner_en_5.5.0_3.0_1725918682934.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bkk_budget_ner_en_5.5.0_3.0_1725918682934.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = XlmRoBertaForTokenClassification.pretrained("bkk_budget_ner","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("bkk_budget_ner", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
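Each position in the `ner` output lines up with the same position in the `token` output, so the predicted tag for every token can be read straight off the transformed DataFrame from the example above:

```python
# The i-th tag in ner.result belongs to the i-th token in token.result
pipelineDF.selectExpr("token.result AS tokens", "ner.result AS tags").show(truncate=False)
```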
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bkk_budget_ner| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|833.9 MB| + +## References + +https://huggingface.co/napatswift/bkk-budget-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-bkk_budget_ner_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-bkk_budget_ner_pipeline_en.md new file mode 100644 index 00000000000000..00794a14aaeef1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-bkk_budget_ner_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bkk_budget_ner_pipeline pipeline XlmRoBertaForTokenClassification from napatswift +author: John Snow Labs +name: bkk_budget_ner_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bkk_budget_ner_pipeline` is a English model originally trained by napatswift. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bkk_budget_ner_pipeline_en_5.5.0_3.0_1725918763723.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bkk_budget_ner_pipeline_en_5.5.0_3.0_1725918763723.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bkk_budget_ner_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bkk_budget_ner_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bkk_budget_ner_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|833.9 MB| + +## References + +https://huggingface.co/napatswift/bkk-budget-ner + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-boolq_microsoft_deberta_v3_large_seed_1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-boolq_microsoft_deberta_v3_large_seed_1_pipeline_en.md new file mode 100644 index 00000000000000..e0c9bfb574ac59 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-boolq_microsoft_deberta_v3_large_seed_1_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English boolq_microsoft_deberta_v3_large_seed_1_pipeline pipeline DeBertaForSequenceClassification from utahnlp +author: John Snow Labs +name: boolq_microsoft_deberta_v3_large_seed_1_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`boolq_microsoft_deberta_v3_large_seed_1_pipeline` is a English model originally trained by utahnlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/boolq_microsoft_deberta_v3_large_seed_1_pipeline_en_5.5.0_3.0_1725848689789.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/boolq_microsoft_deberta_v3_large_seed_1_pipeline_en_5.5.0_3.0_1725848689789.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("boolq_microsoft_deberta_v3_large_seed_1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("boolq_microsoft_deberta_v3_large_seed_1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
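If it is unclear which columns a downloaded pipeline adds, the wrapped Spark ML `PipelineModel` can be inspected before transforming any data. A minimal sketch, assuming `pipeline.model` exposes the underlying `PipelineModel` (the printed stage list should correspond to the "Included Models" section below):

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("boolq_microsoft_deberta_v3_large_seed_1_pipeline", lang="en")

# Each stage is a fitted annotator; its class name hints at the output column it produces
for stage in pipeline.model.stages:
    print(type(stage).__name__)
```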
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|boolq_microsoft_deberta_v3_large_seed_1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.5 GB| + +## References + +https://huggingface.co/utahnlp/boolq_microsoft_deberta-v3-large_seed-1 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_eli5_mlm_model_venciso_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_eli5_mlm_model_venciso_pipeline_en.md new file mode 100644 index 00000000000000..becbe08065ca81 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_eli5_mlm_model_venciso_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_awesome_eli5_mlm_model_venciso_pipeline pipeline RoBertaEmbeddings from venciso +author: John Snow Labs +name: burmese_awesome_eli5_mlm_model_venciso_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_eli5_mlm_model_venciso_pipeline` is a English model originally trained by venciso. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_eli5_mlm_model_venciso_pipeline_en_5.5.0_3.0_1725909962001.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_eli5_mlm_model_venciso_pipeline_en_5.5.0_3.0_1725909962001.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_eli5_mlm_model_venciso_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_eli5_mlm_model_venciso_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_eli5_mlm_model_venciso_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|306.5 MB| + +## References + +https://huggingface.co/venciso/my_awesome_eli5_mlm_model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_model_v2_en.md b/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_model_v2_en.md new file mode 100644 index 00000000000000..7de7f35746e892 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_model_v2_en.md @@ -0,0 +1,98 @@ +--- +layout: model +title: English burmese_awesome_model_v2 DistilBertForSequenceClassification from rarisenpai +author: John Snow Labs +name: burmese_awesome_model_v2 +date: 2024-09-09 +tags: [bert, en, open_source, sequence_classification, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_model_v2` is a English model originally trained by rarisenpai. + +## Predicted Entities + + + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_model_v2_en_5.5.0_3.0_1725918454171.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_model_v2_en_5.5.0_3.0_1725918454171.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python +document_assembler = DocumentAssembler()\ + .setInputCol("text")\ + .setOutputCol("document") + +tokenizer = Tokenizer()\ + .setInputCols("document")\ + .setOutputCol("token") + +sequenceClassifier = DistilBertForSequenceClassification.pretrained("burmese_awesome_model_v2","en")\ + .setInputCols(["document","token"])\ + .setOutputCol("class") + +pipeline = Pipeline().setStages([document_assembler, tokenizer, sequenceClassifier]) + +data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text") + +result = pipeline.fit(data).transform(data) +``` +```scala +val document_assembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val sequenceClassifier = DistilBertForSequenceClassification.pretrained("burmese_awesome_model_v2","en") + .setInputCols(Array("document","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) + +val data = Seq("PUT YOUR STRING HERE").toDS.toDF("text") + +val result = pipeline.fit(data).transform(data) +``` +
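Beyond the predicted label, each classification annotation keeps per-label scores in its metadata. A short continuation of the example above (it assumes the classifier writes to the `class` column as shown; the backticks guard against `class` being parsed as a keyword):

```python
# Each prediction annotation carries the winning label plus a metadata map of label scores
result.selectExpr("explode(`class`) AS prediction") \
      .selectExpr("prediction.result AS label", "prediction.metadata AS scores") \
      .show(truncate=False)
```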
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_model_v2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|811.6 MB| + +## References + +References + +https://huggingface.co/rarisenpai/my-awesome-model_v2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_model_v2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_model_v2_pipeline_en.md new file mode 100644 index 00000000000000..1f94ad2eefbbe6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_model_v2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_awesome_model_v2_pipeline pipeline XlmRoBertaForTokenClassification from lilyyellow +author: John Snow Labs +name: burmese_awesome_model_v2_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_model_v2_pipeline` is a English model originally trained by lilyyellow. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_model_v2_pipeline_en_5.5.0_3.0_1725918574952.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_model_v2_pipeline_en_5.5.0_3.0_1725918574952.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_model_v2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_model_v2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_model_v2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|811.6 MB| + +## References + +https://huggingface.co/lilyyellow/my_awesome_model_v2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_qa_model_10_en.md b/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_qa_model_10_en.md new file mode 100644 index 00000000000000..c27884dbef8d18 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_qa_model_10_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_awesome_qa_model_10 DistilBertForQuestionAnswering from limaatulya +author: John Snow Labs +name: burmese_awesome_qa_model_10 +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_10` is a English model originally trained by limaatulya. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_10_en_5.5.0_3.0_1725877250654.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_10_en_5.5.0_3.0_1725877250654.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = MultiDocumentAssembler() \ + .setInputCol(["question", "context"]) \ + .setOutputCol(["document_question", "document_context"]) + +spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_10","en") \ + .setInputCols(["document_question","document_context"]) \ + .setOutputCol("answer") + +pipeline = Pipeline().setStages([documentAssembler, spanClassifier]) +data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("document_question", "document_context") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new MultiDocumentAssembler() + .setInputCol(Array("question", "context")) + .setOutputCol(Array("document_question", "document_context")) + +val spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_10", "en") + .setInputCols(Array("document_question","document_context")) + .setOutputCol("answer") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier)) +val data = Seq("What framework do I use?","I use spark-nlp.").toDS.toDF("document_question", "document_context") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
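`MultiDocumentAssembler` takes its column lists through the plural setters (`setInputCols`/`setOutputCols`). A runnable sketch with those setters and a way to read the predicted span back out (the question/context pair is illustrative):

```python
import sparknlp
from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import DistilBertForQuestionAnswering
from pyspark.ml import Pipeline

spark = sparknlp.start()

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_10", "en") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])

data = spark.createDataFrame(
    [["What framework do I use?", "I use spark-nlp."]]
).toDF("question", "context")

# The predicted answer span is returned in the "answer" annotation column
pipeline.fit(data).transform(data).selectExpr("answer.result").show(truncate=False)
```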
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_10| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/limaatulya/my_awesome_qa_model_10 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_qa_model_4_en.md b/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_qa_model_4_en.md new file mode 100644 index 00000000000000..521c638eab574d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_qa_model_4_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_awesome_qa_model_4 DistilBertForQuestionAnswering from limaatulya +author: John Snow Labs +name: burmese_awesome_qa_model_4 +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_4` is a English model originally trained by limaatulya. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_4_en_5.5.0_3.0_1725876941862.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_4_en_5.5.0_3.0_1725876941862.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = MultiDocumentAssembler() \ + .setInputCol(["question", "context"]) \ + .setOutputCol(["document_question", "document_context"]) + +spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_4","en") \ + .setInputCols(["document_question","document_context"]) \ + .setOutputCol("answer") + +pipeline = Pipeline().setStages([documentAssembler, spanClassifier]) +data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("document_question", "document_context") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new MultiDocumentAssembler() + .setInputCol(Array("question", "context")) + .setOutputCol(Array("document_question", "document_context")) + +val spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_4", "en") + .setInputCols(Array("document_question","document_context")) + .setOutputCol("answer") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier)) +val data = Seq("What framework do I use?","I use spark-nlp.").toDS.toDF("document_question", "document_context") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_4| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/limaatulya/my_awesome_qa_model_4 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_qa_model_acezxn_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_qa_model_acezxn_pipeline_en.md new file mode 100644 index 00000000000000..1fb0be22037e6f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_qa_model_acezxn_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English burmese_awesome_qa_model_acezxn_pipeline pipeline DistilBertForQuestionAnswering from acezxn +author: John Snow Labs +name: burmese_awesome_qa_model_acezxn_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_acezxn_pipeline` is a English model originally trained by acezxn. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_acezxn_pipeline_en_5.5.0_3.0_1725876833134.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_acezxn_pipeline_en_5.5.0_3.0_1725876833134.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_qa_model_acezxn_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_qa_model_acezxn_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_acezxn_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/acezxn/my_awesome_qa_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_qa_model_kasunrajitha_en.md b/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_qa_model_kasunrajitha_en.md new file mode 100644 index 00000000000000..c8eb52115a2c2b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_qa_model_kasunrajitha_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_awesome_qa_model_kasunrajitha DistilBertForQuestionAnswering from KasunRajitha +author: John Snow Labs +name: burmese_awesome_qa_model_kasunrajitha +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_kasunrajitha` is a English model originally trained by KasunRajitha. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_kasunrajitha_en_5.5.0_3.0_1725892425846.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_kasunrajitha_en_5.5.0_3.0_1725892425846.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_kasunrajitha","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_kasunrajitha", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
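+
+After `transform` runs, the extracted answer is available in the `answer` annotation column configured above; each annotation's `result` field holds the answer text. Continuing the Python example:
+
+```python
+# Show the predicted answer span for every question/context pair
+pipelineDF.select("answer.result").show(truncate=False)
+```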
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_kasunrajitha| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/KasunRajitha/my_awesome_qa_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_token_classification_v2_1_1_en.md b/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_token_classification_v2_1_1_en.md new file mode 100644 index 00000000000000..87a45770181637 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-burmese_awesome_token_classification_v2_1_1_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English burmese_awesome_token_classification_v2_1_1 XlmRoBertaForTokenClassification from lilyyellow +author: John Snow Labs +name: burmese_awesome_token_classification_v2_1_1 +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_token_classification_v2_1_1` is a English model originally trained by lilyyellow. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_token_classification_v2_1_1_en_5.5.0_3.0_1725895002019.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_token_classification_v2_1_1_en_5.5.0_3.0_1725895002019.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("burmese_awesome_token_classification_v2_1_1","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("burmese_awesome_token_classification_v2_1_1", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
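+
+The `ner` output column configured above holds one label per token. Continuing the Python example, tokens and their predicted tags can be inspected side by side:
+
+```python
+# Token strings and their predicted labels, one array per input row
+pipelineDF.select("token.result", "ner.result").show(truncate=False)
+```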
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_token_classification_v2_1_1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|830.8 MB| + +## References + +https://huggingface.co/lilyyellow/my_awesome_token_classification_v2.1.1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-camembert_base_masked_lm_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-camembert_base_masked_lm_pipeline_en.md new file mode 100644 index 00000000000000..dfe208eade76f0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-camembert_base_masked_lm_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English camembert_base_masked_lm_pipeline pipeline CamemBertEmbeddings from talkingscott +author: John Snow Labs +name: camembert_base_masked_lm_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`camembert_base_masked_lm_pipeline` is a English model originally trained by talkingscott. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/camembert_base_masked_lm_pipeline_en_5.5.0_3.0_1725898514491.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/camembert_base_masked_lm_pipeline_en_5.5.0_3.0_1725898514491.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("camembert_base_masked_lm_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("camembert_base_masked_lm_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
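+
+As in the other pipeline examples, `df` is assumed to exist before `transform` is called. A minimal, hypothetical way to build it is shown below; the single `text` input column is an assumption based on the DocumentAssembler stage listed under "Included Models", not something this page specifies:
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+# Start Spark NLP (assumed local environment)
+spark = sparknlp.start()
+
+# Hypothetical raw-text input for the embeddings pipeline
+df = spark.createDataFrame([("I love Spark NLP",)], ["text"])
+
+pipeline = PretrainedPipeline("camembert_base_masked_lm_pipeline", lang = "en")
+annotations = pipeline.transform(df)
+```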
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|camembert_base_masked_lm_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/talkingscott/camembert-base-masked-lm + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-cat_sayula_popoluca_iw_2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-cat_sayula_popoluca_iw_2_pipeline_en.md new file mode 100644 index 00000000000000..8d342cab0611bc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-cat_sayula_popoluca_iw_2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English cat_sayula_popoluca_iw_2_pipeline pipeline XlmRoBertaForTokenClassification from homersimpson +author: John Snow Labs +name: cat_sayula_popoluca_iw_2_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cat_sayula_popoluca_iw_2_pipeline` is a English model originally trained by homersimpson. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cat_sayula_popoluca_iw_2_pipeline_en_5.5.0_3.0_1725918325827.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cat_sayula_popoluca_iw_2_pipeline_en_5.5.0_3.0_1725918325827.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("cat_sayula_popoluca_iw_2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("cat_sayula_popoluca_iw_2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cat_sayula_popoluca_iw_2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|424.4 MB| + +## References + +https://huggingface.co/homersimpson/cat-pos-iw-2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-chai_deberta_v3_base_reward_model_en.md b/docs/_posts/ahmedlone127/2024-09-09-chai_deberta_v3_base_reward_model_en.md new file mode 100644 index 00000000000000..2b1b582151a1fe --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-chai_deberta_v3_base_reward_model_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English chai_deberta_v3_base_reward_model DeBertaForSequenceClassification from decem +author: John Snow Labs +name: chai_deberta_v3_base_reward_model +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, deberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DeBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`chai_deberta_v3_base_reward_model` is a English model originally trained by decem. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/chai_deberta_v3_base_reward_model_en_5.5.0_3.0_1725859802052.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/chai_deberta_v3_base_reward_model_en_5.5.0_3.0_1725859802052.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DeBertaForSequenceClassification.pretrained("chai_deberta_v3_base_reward_model","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DeBertaForSequenceClassification.pretrained("chai_deberta_v3_base_reward_model", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
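+
+Once the pipeline has been fitted and applied, the predicted label for each row sits in the `class` annotation column configured above, inside its `result` field:
+
+```python
+# Show the predicted class label for every input row
+pipelineDF.select("class.result").show(truncate=False)
+```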
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|chai_deberta_v3_base_reward_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|634.4 MB| + +## References + +https://huggingface.co/decem/chai-deberta-v3-base-reward-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-chnsenticorp_xlm_r_en.md b/docs/_posts/ahmedlone127/2024-09-09-chnsenticorp_xlm_r_en.md new file mode 100644 index 00000000000000..f30f4ad5bdbc51 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-chnsenticorp_xlm_r_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English chnsenticorp_xlm_r XlmRoBertaForSequenceClassification from Cincin-nvp +author: John Snow Labs +name: chnsenticorp_xlm_r +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`chnsenticorp_xlm_r` is a English model originally trained by Cincin-nvp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/chnsenticorp_xlm_r_en_5.5.0_3.0_1725908001685.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/chnsenticorp_xlm_r_en_5.5.0_3.0_1725908001685.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("chnsenticorp_xlm_r","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("chnsenticorp_xlm_r", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|chnsenticorp_xlm_r| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|850.8 MB| + +## References + +https://huggingface.co/Cincin-nvp/ChnSentiCorp_XLM-R \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-choubert_16_pipeline_fr.md b/docs/_posts/ahmedlone127/2024-09-09-choubert_16_pipeline_fr.md new file mode 100644 index 00000000000000..06a1fc0f8ec6f0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-choubert_16_pipeline_fr.md @@ -0,0 +1,70 @@ +--- +layout: model +title: French choubert_16_pipeline pipeline CamemBertEmbeddings from ChouBERT +author: John Snow Labs +name: choubert_16_pipeline +date: 2024-09-09 +tags: [fr, open_source, pipeline, onnx] +task: Embeddings +language: fr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`choubert_16_pipeline` is a French model originally trained by ChouBERT. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/choubert_16_pipeline_fr_5.5.0_3.0_1725898452923.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/choubert_16_pipeline_fr_5.5.0_3.0_1725898452923.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("choubert_16_pipeline", lang = "fr") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("choubert_16_pipeline", lang = "fr") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|choubert_16_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|fr| +|Size:|412.8 MB| + +## References + +https://huggingface.co/ChouBERT/ChouBERT-16 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-cino_small_v2_tncc_document_tsheg_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-cino_small_v2_tncc_document_tsheg_pipeline_en.md new file mode 100644 index 00000000000000..a90387871dad73 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-cino_small_v2_tncc_document_tsheg_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English cino_small_v2_tncc_document_tsheg_pipeline pipeline XlmRoBertaForSequenceClassification from UTibetNLP +author: John Snow Labs +name: cino_small_v2_tncc_document_tsheg_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cino_small_v2_tncc_document_tsheg_pipeline` is a English model originally trained by UTibetNLP. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cino_small_v2_tncc_document_tsheg_pipeline_en_5.5.0_3.0_1725906283691.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cino_small_v2_tncc_document_tsheg_pipeline_en_5.5.0_3.0_1725906283691.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("cino_small_v2_tncc_document_tsheg_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("cino_small_v2_tncc_document_tsheg_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cino_small_v2_tncc_document_tsheg_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|552.9 MB| + +## References + +https://huggingface.co/UTibetNLP/cino-small-v2_tncc-document_tsheg + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-clas_4_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-clas_4_pipeline_en.md new file mode 100644 index 00000000000000..6c12c392aa4659 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-clas_4_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English clas_4_pipeline pipeline RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: clas_4_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`clas_4_pipeline` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/clas_4_pipeline_en_5.5.0_3.0_1725904393979.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/clas_4_pipeline_en_5.5.0_3.0_1725904393979.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("clas_4_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("clas_4_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|clas_4_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Clas_4 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-clinicalbert_en.md b/docs/_posts/ahmedlone127/2024-09-09-clinicalbert_en.md new file mode 100644 index 00000000000000..153ca47a7eb7fe --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-clinicalbert_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English clinicalbert DistilBertEmbeddings from DHEIVER +author: John Snow Labs +name: clinicalbert +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`clinicalbert` is a English model originally trained by DHEIVER. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/clinicalbert_en_5.5.0_3.0_1725921709312.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/clinicalbert_en_5.5.0_3.0_1725921709312.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("clinicalbert","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("clinicalbert","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
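+
+Each annotation in the `embeddings` output column defined above carries its token vector in an `embeddings` field. Continuing the Python example, the raw vectors can be flattened to one row per token (a usage sketch, not part of the original example):
+
+```python
+from pyspark.sql.functions import explode
+
+# One row per token, holding that token's DistilBERT embedding vector
+pipelineDF.select(explode("embeddings.embeddings").alias("token_embedding")).show(5)
+```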
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|clinicalbert| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|505.3 MB| + +## References + +https://huggingface.co/DHEIVER/ClinicalBERT \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-codebert_trbcb_chaft_en.md b/docs/_posts/ahmedlone127/2024-09-09-codebert_trbcb_chaft_en.md new file mode 100644 index 00000000000000..f4ea6e97fe02a4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-codebert_trbcb_chaft_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English codebert_trbcb_chaft RoBertaForSequenceClassification from buelfhood +author: John Snow Labs +name: codebert_trbcb_chaft +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`codebert_trbcb_chaft` is a English model originally trained by buelfhood. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/codebert_trbcb_chaft_en_5.5.0_3.0_1725911913659.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/codebert_trbcb_chaft_en_5.5.0_3.0_1725911913659.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("codebert_trbcb_chaft","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("codebert_trbcb_chaft", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|codebert_trbcb_chaft| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.3 MB| + +## References + +https://huggingface.co/buelfhood/CodeBERT_TrBCB_ChaFT \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-codebert_trbcb_chaft_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-codebert_trbcb_chaft_pipeline_en.md new file mode 100644 index 00000000000000..4792297f3c2119 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-codebert_trbcb_chaft_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English codebert_trbcb_chaft_pipeline pipeline RoBertaForSequenceClassification from buelfhood +author: John Snow Labs +name: codebert_trbcb_chaft_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`codebert_trbcb_chaft_pipeline` is a English model originally trained by buelfhood. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/codebert_trbcb_chaft_pipeline_en_5.5.0_3.0_1725911936085.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/codebert_trbcb_chaft_pipeline_en_5.5.0_3.0_1725911936085.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("codebert_trbcb_chaft_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("codebert_trbcb_chaft_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|codebert_trbcb_chaft_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.4 MB| + +## References + +https://huggingface.co/buelfhood/CodeBERT_TrBCB_ChaFT + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-cold_fusion_itr22_seed0_en.md b/docs/_posts/ahmedlone127/2024-09-09-cold_fusion_itr22_seed0_en.md new file mode 100644 index 00000000000000..d4f807ab1150a0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-cold_fusion_itr22_seed0_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English cold_fusion_itr22_seed0 RoBertaForSequenceClassification from ibm +author: John Snow Labs +name: cold_fusion_itr22_seed0 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cold_fusion_itr22_seed0` is a English model originally trained by ibm. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cold_fusion_itr22_seed0_en_5.5.0_3.0_1725919959337.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cold_fusion_itr22_seed0_en_5.5.0_3.0_1725919959337.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("cold_fusion_itr22_seed0","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("cold_fusion_itr22_seed0", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
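+
+Beyond the predicted label in `class.result`, Spark NLP classification annotators generally attach per-label scores to each annotation's metadata map, which is handy when a confidence threshold is needed. A usage sketch (the exact metadata keys depend on the model's labels and are an assumption here):
+
+```python
+# Predicted label together with the raw score map from the annotation metadata
+pipelineDF.select("class.result", "class.metadata").show(truncate=False)
+```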
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cold_fusion_itr22_seed0| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.0 MB| + +## References + +https://huggingface.co/ibm/ColD-Fusion-itr22-seed0 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-cold_fusion_itr22_seed4_en.md b/docs/_posts/ahmedlone127/2024-09-09-cold_fusion_itr22_seed4_en.md new file mode 100644 index 00000000000000..abbc5e6add0021 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-cold_fusion_itr22_seed4_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English cold_fusion_itr22_seed4 RoBertaForSequenceClassification from ibm +author: John Snow Labs +name: cold_fusion_itr22_seed4 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cold_fusion_itr22_seed4` is a English model originally trained by ibm. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cold_fusion_itr22_seed4_en_5.5.0_3.0_1725920420211.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cold_fusion_itr22_seed4_en_5.5.0_3.0_1725920420211.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("cold_fusion_itr22_seed4","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("cold_fusion_itr22_seed4", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cold_fusion_itr22_seed4| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.0 MB| + +## References + +https://huggingface.co/ibm/ColD-Fusion-itr22-seed4 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-complaints_classifier_arielkanevsky_en.md b/docs/_posts/ahmedlone127/2024-09-09-complaints_classifier_arielkanevsky_en.md new file mode 100644 index 00000000000000..5dfca173ae3555 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-complaints_classifier_arielkanevsky_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English complaints_classifier_arielkanevsky DistilBertForSequenceClassification from Arielkanevsky +author: John Snow Labs +name: complaints_classifier_arielkanevsky +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`complaints_classifier_arielkanevsky` is a English model originally trained by Arielkanevsky. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/complaints_classifier_arielkanevsky_en_5.5.0_3.0_1725873264340.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/complaints_classifier_arielkanevsky_en_5.5.0_3.0_1725873264340.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DistilBertForSequenceClassification.pretrained("complaints_classifier_arielkanevsky","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DistilBertForSequenceClassification.pretrained("complaints_classifier_arielkanevsky", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|complaints_classifier_arielkanevsky| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/Arielkanevsky/Complaints_Classifier \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-complaints_classifier_arielkanevsky_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-complaints_classifier_arielkanevsky_pipeline_en.md new file mode 100644 index 00000000000000..4064cff1e6b556 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-complaints_classifier_arielkanevsky_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English complaints_classifier_arielkanevsky_pipeline pipeline DistilBertForSequenceClassification from Arielkanevsky +author: John Snow Labs +name: complaints_classifier_arielkanevsky_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`complaints_classifier_arielkanevsky_pipeline` is a English model originally trained by Arielkanevsky. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/complaints_classifier_arielkanevsky_pipeline_en_5.5.0_3.0_1725873276107.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/complaints_classifier_arielkanevsky_pipeline_en_5.5.0_3.0_1725873276107.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("complaints_classifier_arielkanevsky_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("complaints_classifier_arielkanevsky_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|complaints_classifier_arielkanevsky_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/Arielkanevsky/Complaints_Classifier + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-covid_tweet_sentiment_analyzer_roberta_en.md b/docs/_posts/ahmedlone127/2024-09-09-covid_tweet_sentiment_analyzer_roberta_en.md new file mode 100644 index 00000000000000..bba4c51e9b7b2a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-covid_tweet_sentiment_analyzer_roberta_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English covid_tweet_sentiment_analyzer_roberta RoBertaForSequenceClassification from KwameOO +author: John Snow Labs +name: covid_tweet_sentiment_analyzer_roberta +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`covid_tweet_sentiment_analyzer_roberta` is a English model originally trained by KwameOO. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/covid_tweet_sentiment_analyzer_roberta_en_5.5.0_3.0_1725902820809.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/covid_tweet_sentiment_analyzer_roberta_en_5.5.0_3.0_1725902820809.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("covid_tweet_sentiment_analyzer_roberta","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("covid_tweet_sentiment_analyzer_roberta", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|covid_tweet_sentiment_analyzer_roberta| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.1 MB| + +## References + +https://huggingface.co/KwameOO/covid-tweet-sentiment-analyzer-roberta \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-cross_all_bs320_vanilla_finetuned_webnlg2020_correctness_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-cross_all_bs320_vanilla_finetuned_webnlg2020_correctness_pipeline_en.md new file mode 100644 index 00000000000000..67a64c34843b39 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-cross_all_bs320_vanilla_finetuned_webnlg2020_correctness_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English cross_all_bs320_vanilla_finetuned_webnlg2020_correctness_pipeline pipeline MPNetEmbeddings from teven +author: John Snow Labs +name: cross_all_bs320_vanilla_finetuned_webnlg2020_correctness_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cross_all_bs320_vanilla_finetuned_webnlg2020_correctness_pipeline` is a English model originally trained by teven. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cross_all_bs320_vanilla_finetuned_webnlg2020_correctness_pipeline_en_5.5.0_3.0_1725897000286.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cross_all_bs320_vanilla_finetuned_webnlg2020_correctness_pipeline_en_5.5.0_3.0_1725897000286.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("cross_all_bs320_vanilla_finetuned_webnlg2020_correctness_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("cross_all_bs320_vanilla_finetuned_webnlg2020_correctness_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cross_all_bs320_vanilla_finetuned_webnlg2020_correctness_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|407.3 MB| + +## References + +https://huggingface.co/teven/cross_all_bs320_vanilla_finetuned_WebNLG2020_correctness + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-csce5218_01percent_en.md b/docs/_posts/ahmedlone127/2024-09-09-csce5218_01percent_en.md new file mode 100644 index 00000000000000..a5daeee2233e46 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-csce5218_01percent_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English csce5218_01percent RoBertaForSequenceClassification from HanzhiZhang +author: John Snow Labs +name: csce5218_01percent +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`csce5218_01percent` is a English model originally trained by HanzhiZhang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/csce5218_01percent_en_5.5.0_3.0_1725903351258.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/csce5218_01percent_en_5.5.0_3.0_1725903351258.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("csce5218_01percent","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("csce5218_01percent", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|csce5218_01percent| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|452.0 MB| + +## References + +https://huggingface.co/HanzhiZhang/CSCE5218_01percent \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-csce5218_01percent_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-csce5218_01percent_pipeline_en.md new file mode 100644 index 00000000000000..2f97642a88dc6c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-csce5218_01percent_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English csce5218_01percent_pipeline pipeline RoBertaForSequenceClassification from HanzhiZhang +author: John Snow Labs +name: csce5218_01percent_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`csce5218_01percent_pipeline` is a English model originally trained by HanzhiZhang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/csce5218_01percent_pipeline_en_5.5.0_3.0_1725903379001.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/csce5218_01percent_pipeline_en_5.5.0_3.0_1725903379001.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("csce5218_01percent_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("csce5218_01percent_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|csce5218_01percent_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|452.0 MB| + +## References + +https://huggingface.co/HanzhiZhang/CSCE5218_01percent + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-custom_dataset_finetuned_en.md b/docs/_posts/ahmedlone127/2024-09-09-custom_dataset_finetuned_en.md new file mode 100644 index 00000000000000..d6b20b3bd1c2bb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-custom_dataset_finetuned_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English custom_dataset_finetuned DistilBertEmbeddings from Sandy1857 +author: John Snow Labs +name: custom_dataset_finetuned +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`custom_dataset_finetuned` is a English model originally trained by Sandy1857. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/custom_dataset_finetuned_en_5.5.0_3.0_1725921518126.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/custom_dataset_finetuned_en_5.5.0_3.0_1725921518126.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("custom_dataset_finetuned","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("custom_dataset_finetuned","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|custom_dataset_finetuned| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Sandy1857/custom-dataset-finetuned \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-danish_distilbert_base_uncased_rnribeiro_en.md b/docs/_posts/ahmedlone127/2024-09-09-danish_distilbert_base_uncased_rnribeiro_en.md new file mode 100644 index 00000000000000..356b1415b31eeb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-danish_distilbert_base_uncased_rnribeiro_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English danish_distilbert_base_uncased_rnribeiro DistilBertEmbeddings from rnribeiro +author: John Snow Labs +name: danish_distilbert_base_uncased_rnribeiro +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`danish_distilbert_base_uncased_rnribeiro` is a English model originally trained by rnribeiro. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/danish_distilbert_base_uncased_rnribeiro_en_5.5.0_3.0_1725909531078.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/danish_distilbert_base_uncased_rnribeiro_en_5.5.0_3.0_1725909531078.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("danish_distilbert_base_uncased_rnribeiro","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("danish_distilbert_base_uncased_rnribeiro","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|danish_distilbert_base_uncased_rnribeiro| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/rnribeiro/DA-distilbert-base-uncased \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-danish_distilbert_base_uncased_rnribeiro_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-danish_distilbert_base_uncased_rnribeiro_pipeline_en.md new file mode 100644 index 00000000000000..ab481638c3a8ca --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-danish_distilbert_base_uncased_rnribeiro_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English danish_distilbert_base_uncased_rnribeiro_pipeline pipeline DistilBertEmbeddings from rnribeiro +author: John Snow Labs +name: danish_distilbert_base_uncased_rnribeiro_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`danish_distilbert_base_uncased_rnribeiro_pipeline` is a English model originally trained by rnribeiro. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/danish_distilbert_base_uncased_rnribeiro_pipeline_en_5.5.0_3.0_1725909543584.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/danish_distilbert_base_uncased_rnribeiro_pipeline_en_5.5.0_3.0_1725909543584.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("danish_distilbert_base_uncased_rnribeiro_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("danish_distilbert_base_uncased_rnribeiro_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|danish_distilbert_base_uncased_rnribeiro_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/rnribeiro/DA-distilbert-base-uncased + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dasvny_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-dasvny_pipeline_en.md new file mode 100644 index 00000000000000..67fdf820e64989 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dasvny_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dasvny_pipeline pipeline MarianTransformer from Erda +author: John Snow Labs +name: dasvny_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dasvny_pipeline` is a English model originally trained by Erda. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dasvny_pipeline_en_5.5.0_3.0_1725891865519.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dasvny_pipeline_en_5.5.0_3.0_1725891865519.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dasvny_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dasvny_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dasvny_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|505.2 MB| + +## References + +https://huggingface.co/Erda/dasvNy + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-deberta_v3_base_4bit_64rank_backbone_en.md b/docs/_posts/ahmedlone127/2024-09-09-deberta_v3_base_4bit_64rank_backbone_en.md new file mode 100644 index 00000000000000..59e5be7353f23c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-deberta_v3_base_4bit_64rank_backbone_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English deberta_v3_base_4bit_64rank_backbone DeBertaForSequenceClassification from yxli2123 +author: John Snow Labs +name: deberta_v3_base_4bit_64rank_backbone +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, deberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DeBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`deberta_v3_base_4bit_64rank_backbone` is a English model originally trained by yxli2123. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deberta_v3_base_4bit_64rank_backbone_en_5.5.0_3.0_1725859573599.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deberta_v3_base_4bit_64rank_backbone_en_5.5.0_3.0_1725859573599.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = DeBertaForSequenceClassification.pretrained("deberta_v3_base_4bit_64rank_backbone","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols(Array("document"))
  .setOutputCol("token")

val sequenceClassifier = DeBertaForSequenceClassification.pretrained("deberta_v3_base_4bit_64rank_backbone", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div><div class="h3-box" markdown="1">
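
If needed, the predicted label for each input row can be read from the `class` output column produced by the classifier stage above. A small sketch, assuming the Python pipeline above has been executed:

```python
# "class.result" holds the predicted label(s) as an array of strings per row.
pipelineDF.select("text", "class.result").show(truncate=False)
```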
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|deberta_v3_base_4bit_64rank_backbone| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|689.9 MB| + +## References + +https://huggingface.co/yxli2123/deberta-v3-base-4bit-64rank-backbone \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-deberta_v3_base_finetuned_uf_ner_6x_0type_v1_en.md b/docs/_posts/ahmedlone127/2024-09-09-deberta_v3_base_finetuned_uf_ner_6x_0type_v1_en.md new file mode 100644 index 00000000000000..2e24152e35514e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-deberta_v3_base_finetuned_uf_ner_6x_0type_v1_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English deberta_v3_base_finetuned_uf_ner_6x_0type_v1 DeBertaForSequenceClassification from mariolinml +author: John Snow Labs +name: deberta_v3_base_finetuned_uf_ner_6x_0type_v1 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, deberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DeBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`deberta_v3_base_finetuned_uf_ner_6x_0type_v1` is a English model originally trained by mariolinml. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deberta_v3_base_finetuned_uf_ner_6x_0type_v1_en_5.5.0_3.0_1725859022033.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deberta_v3_base_finetuned_uf_ner_6x_0type_v1_en_5.5.0_3.0_1725859022033.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = DeBertaForSequenceClassification.pretrained("deberta_v3_base_finetuned_uf_ner_6x_0type_v1","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols(Array("document"))
  .setOutputCol("token")

val sequenceClassifier = DeBertaForSequenceClassification.pretrained("deberta_v3_base_finetuned_uf_ner_6x_0type_v1", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div><div class="h3-box" markdown="1">
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|deberta_v3_base_finetuned_uf_ner_6x_0type_v1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|577.8 MB| + +## References + +https://huggingface.co/mariolinml/deberta-v3-base-finetuned-uf-ner-6X-0type_v1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-deberta_v3_base_glue_sst2_en.md b/docs/_posts/ahmedlone127/2024-09-09-deberta_v3_base_glue_sst2_en.md new file mode 100644 index 00000000000000..f735013dd3a717 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-deberta_v3_base_glue_sst2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English deberta_v3_base_glue_sst2 DeBertaForSequenceClassification from ficsort +author: John Snow Labs +name: deberta_v3_base_glue_sst2 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, deberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DeBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`deberta_v3_base_glue_sst2` is a English model originally trained by ficsort. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deberta_v3_base_glue_sst2_en_5.5.0_3.0_1725848995900.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deberta_v3_base_glue_sst2_en_5.5.0_3.0_1725848995900.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = DeBertaForSequenceClassification.pretrained("deberta_v3_base_glue_sst2","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols(Array("document"))
  .setOutputCol("token")

val sequenceClassifier = DeBertaForSequenceClassification.pretrained("deberta_v3_base_glue_sst2", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div><div class="h3-box" markdown="1">
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|deberta_v3_base_glue_sst2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|583.7 MB| + +## References + +https://huggingface.co/ficsort/deberta-v3-base-glue-sst2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-deberta_v3_base_hate_speech_offensive_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-deberta_v3_base_hate_speech_offensive_pipeline_en.md new file mode 100644 index 00000000000000..d5eb512fad464d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-deberta_v3_base_hate_speech_offensive_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English deberta_v3_base_hate_speech_offensive_pipeline pipeline DeBertaForSequenceClassification from kietnt0603 +author: John Snow Labs +name: deberta_v3_base_hate_speech_offensive_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`deberta_v3_base_hate_speech_offensive_pipeline` is a English model originally trained by kietnt0603. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deberta_v3_base_hate_speech_offensive_pipeline_en_5.5.0_3.0_1725880154075.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deberta_v3_base_hate_speech_offensive_pipeline_en_5.5.0_3.0_1725880154075.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("deberta_v3_base_hate_speech_offensive_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("deberta_v3_base_hate_speech_offensive_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|deberta_v3_base_hate_speech_offensive_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|593.3 MB| + +## References + +https://huggingface.co/kietnt0603/deberta-v3-base-hate-speech-offensive + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-deberta_v3_large_survey_nepal_bhasa_fact_main_passage_rater_all_gpt4_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-deberta_v3_large_survey_nepal_bhasa_fact_main_passage_rater_all_gpt4_pipeline_en.md new file mode 100644 index 00000000000000..889f5f54328762 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-deberta_v3_large_survey_nepal_bhasa_fact_main_passage_rater_all_gpt4_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English deberta_v3_large_survey_nepal_bhasa_fact_main_passage_rater_all_gpt4_pipeline pipeline DeBertaForSequenceClassification from domenicrosati +author: John Snow Labs +name: deberta_v3_large_survey_nepal_bhasa_fact_main_passage_rater_all_gpt4_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`deberta_v3_large_survey_nepal_bhasa_fact_main_passage_rater_all_gpt4_pipeline` is a English model originally trained by domenicrosati. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deberta_v3_large_survey_nepal_bhasa_fact_main_passage_rater_all_gpt4_pipeline_en_5.5.0_3.0_1725850193029.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deberta_v3_large_survey_nepal_bhasa_fact_main_passage_rater_all_gpt4_pipeline_en_5.5.0_3.0_1725850193029.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("deberta_v3_large_survey_nepal_bhasa_fact_main_passage_rater_all_gpt4_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("deberta_v3_large_survey_nepal_bhasa_fact_main_passage_rater_all_gpt4_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|deberta_v3_large_survey_nepal_bhasa_fact_main_passage_rater_all_gpt4_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.5 GB| + +## References + +https://huggingface.co/domenicrosati/deberta-v3-large-survey-new_fact_main_passage-rater-all-gpt4 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-debertaemotionbalanced_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-debertaemotionbalanced_pipeline_en.md new file mode 100644 index 00000000000000..c1260f5357b09b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-debertaemotionbalanced_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English debertaemotionbalanced_pipeline pipeline DeBertaForSequenceClassification from aliciiavs +author: John Snow Labs +name: debertaemotionbalanced_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`debertaemotionbalanced_pipeline` is a English model originally trained by aliciiavs. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/debertaemotionbalanced_pipeline_en_5.5.0_3.0_1725859792180.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/debertaemotionbalanced_pipeline_en_5.5.0_3.0_1725859792180.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("debertaemotionbalanced_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("debertaemotionbalanced_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|debertaemotionbalanced_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|433.7 MB| + +## References + +https://huggingface.co/aliciiavs/debertaemotionbalanced + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-deneme_model_eng_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-deneme_model_eng_pipeline_en.md new file mode 100644 index 00000000000000..da9a7e1c0a0231 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-deneme_model_eng_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English deneme_model_eng_pipeline pipeline DistilBertForQuestionAnswering from yegokpinar +author: John Snow Labs +name: deneme_model_eng_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`deneme_model_eng_pipeline` is a English model originally trained by yegokpinar. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deneme_model_eng_pipeline_en_5.5.0_3.0_1725876830551.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deneme_model_eng_pipeline_en_5.5.0_3.0_1725876830551.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("deneme_model_eng_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("deneme_model_eng_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|deneme_model_eng_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/yegokpinar/deneme_model_eng + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-deprem_mdeberta_binary_pipeline_tr.md b/docs/_posts/ahmedlone127/2024-09-09-deprem_mdeberta_binary_pipeline_tr.md new file mode 100644 index 00000000000000..a996376f58ac43 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-deprem_mdeberta_binary_pipeline_tr.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Turkish deprem_mdeberta_binary_pipeline pipeline DeBertaForSequenceClassification from ctoraman +author: John Snow Labs +name: deprem_mdeberta_binary_pipeline +date: 2024-09-09 +tags: [tr, open_source, pipeline, onnx] +task: Text Classification +language: tr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`deprem_mdeberta_binary_pipeline` is a Turkish model originally trained by ctoraman. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deprem_mdeberta_binary_pipeline_tr_5.5.0_3.0_1725880609848.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deprem_mdeberta_binary_pipeline_tr_5.5.0_3.0_1725880609848.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("deprem_mdeberta_binary_pipeline", lang = "tr") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("deprem_mdeberta_binary_pipeline", lang = "tr") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|deprem_mdeberta_binary_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|tr| +|Size:|791.2 MB| + +## References + +https://huggingface.co/ctoraman/deprem-mdeberta-binary + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-did_the_doctor_say_hello_bert_first128_en.md b/docs/_posts/ahmedlone127/2024-09-09-did_the_doctor_say_hello_bert_first128_en.md new file mode 100644 index 00000000000000..714f3c8ddc339c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-did_the_doctor_say_hello_bert_first128_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English did_the_doctor_say_hello_bert_first128 BertForSequenceClassification from etadevosyan +author: John Snow Labs +name: did_the_doctor_say_hello_bert_first128 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`did_the_doctor_say_hello_bert_first128` is a English model originally trained by etadevosyan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/did_the_doctor_say_hello_bert_first128_en_5.5.0_3.0_1725900364162.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/did_the_doctor_say_hello_bert_first128_en_5.5.0_3.0_1725900364162.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = BertForSequenceClassification.pretrained("did_the_doctor_say_hello_bert_first128","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols(Array("document"))
  .setOutputCol("token")

val sequenceClassifier = BertForSequenceClassification.pretrained("did_the_doctor_say_hello_bert_first128", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div><div class="h3-box" markdown="1">
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|did_the_doctor_say_hello_bert_first128| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|666.5 MB| + +## References + +https://huggingface.co/etadevosyan/did_the_doctor_say_hello_bert_First128 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-did_the_doctor_say_the_nurse_will_contact_the_patient_in_the_chat_room_bert_last512_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-did_the_doctor_say_the_nurse_will_contact_the_patient_in_the_chat_room_bert_last512_pipeline_en.md new file mode 100644 index 00000000000000..636c39237c5a34 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-did_the_doctor_say_the_nurse_will_contact_the_patient_in_the_chat_room_bert_last512_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English did_the_doctor_say_the_nurse_will_contact_the_patient_in_the_chat_room_bert_last512_pipeline pipeline BertForSequenceClassification from etadevosyan +author: John Snow Labs +name: did_the_doctor_say_the_nurse_will_contact_the_patient_in_the_chat_room_bert_last512_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`did_the_doctor_say_the_nurse_will_contact_the_patient_in_the_chat_room_bert_last512_pipeline` is a English model originally trained by etadevosyan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/did_the_doctor_say_the_nurse_will_contact_the_patient_in_the_chat_room_bert_last512_pipeline_en_5.5.0_3.0_1725900998719.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/did_the_doctor_say_the_nurse_will_contact_the_patient_in_the_chat_room_bert_last512_pipeline_en_5.5.0_3.0_1725900998719.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("did_the_doctor_say_the_nurse_will_contact_the_patient_in_the_chat_room_bert_last512_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("did_the_doctor_say_the_nurse_will_contact_the_patient_in_the_chat_room_bert_last512_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|did_the_doctor_say_the_nurse_will_contact_the_patient_in_the_chat_room_bert_last512_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|666.5 MB| + +## References + +https://huggingface.co/etadevosyan/did_the_doctor_say_the_nurse_will_contact_the_patient_in_the_chat_room_bert_Last512 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dilemmas_roberta_text_disagreement_predictor_en.md b/docs/_posts/ahmedlone127/2024-09-09-dilemmas_roberta_text_disagreement_predictor_en.md new file mode 100644 index 00000000000000..66599045af482a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dilemmas_roberta_text_disagreement_predictor_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dilemmas_roberta_text_disagreement_predictor RoBertaForSequenceClassification from RuyuanWan +author: John Snow Labs +name: dilemmas_roberta_text_disagreement_predictor +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dilemmas_roberta_text_disagreement_predictor` is a English model originally trained by RuyuanWan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dilemmas_roberta_text_disagreement_predictor_en_5.5.0_3.0_1725920038592.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dilemmas_roberta_text_disagreement_predictor_en_5.5.0_3.0_1725920038592.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = RoBertaForSequenceClassification.pretrained("dilemmas_roberta_text_disagreement_predictor","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols(Array("document"))
  .setOutputCol("token")

val sequenceClassifier = RoBertaForSequenceClassification.pretrained("dilemmas_roberta_text_disagreement_predictor", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div><div class="h3-box" markdown="1">
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dilemmas_roberta_text_disagreement_predictor| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|431.4 MB| + +## References + +https://huggingface.co/RuyuanWan/Dilemmas_RoBERTa_Text_Disagreement_Predictor \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dilemmas_roberta_text_disagreement_predictor_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-dilemmas_roberta_text_disagreement_predictor_pipeline_en.md new file mode 100644 index 00000000000000..99dfd1edff4909 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dilemmas_roberta_text_disagreement_predictor_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dilemmas_roberta_text_disagreement_predictor_pipeline pipeline RoBertaForSequenceClassification from RuyuanWan +author: John Snow Labs +name: dilemmas_roberta_text_disagreement_predictor_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dilemmas_roberta_text_disagreement_predictor_pipeline` is a English model originally trained by RuyuanWan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dilemmas_roberta_text_disagreement_predictor_pipeline_en_5.5.0_3.0_1725920077552.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dilemmas_roberta_text_disagreement_predictor_pipeline_en_5.5.0_3.0_1725920077552.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dilemmas_roberta_text_disagreement_predictor_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dilemmas_roberta_text_disagreement_predictor_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dilemmas_roberta_text_disagreement_predictor_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|431.4 MB| + +## References + +https://huggingface.co/RuyuanWan/Dilemmas_RoBERTa_Text_Disagreement_Predictor + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-discord_twitter_distilbert_en.md b/docs/_posts/ahmedlone127/2024-09-09-discord_twitter_distilbert_en.md new file mode 100644 index 00000000000000..fc84c7313afe06 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-discord_twitter_distilbert_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English discord_twitter_distilbert DistilBertForSequenceClassification from windshield-viper +author: John Snow Labs +name: discord_twitter_distilbert +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`discord_twitter_distilbert` is a English model originally trained by windshield-viper. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/discord_twitter_distilbert_en_5.5.0_3.0_1725873504502.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/discord_twitter_distilbert_en_5.5.0_3.0_1725873504502.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = DistilBertForSequenceClassification.pretrained("discord_twitter_distilbert","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols(Array("document"))
  .setOutputCol("token")

val sequenceClassifier = DistilBertForSequenceClassification.pretrained("discord_twitter_distilbert", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div><div class="h3-box" markdown="1">
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|discord_twitter_distilbert| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/windshield-viper/discord-twitter-distilbert \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_multilingual_cased_finetuned_cases_pipeline_xx.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_multilingual_cased_finetuned_cases_pipeline_xx.md new file mode 100644 index 00000000000000..67b784ac4223d3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_multilingual_cased_finetuned_cases_pipeline_xx.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Multilingual distilbert_base_multilingual_cased_finetuned_cases_pipeline pipeline DistilBertEmbeddings from MSG2002 +author: John Snow Labs +name: distilbert_base_multilingual_cased_finetuned_cases_pipeline +date: 2024-09-09 +tags: [xx, open_source, pipeline, onnx] +task: Embeddings +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_multilingual_cased_finetuned_cases_pipeline` is a Multilingual model originally trained by MSG2002. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_multilingual_cased_finetuned_cases_pipeline_xx_5.5.0_3.0_1725905732846.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_multilingual_cased_finetuned_cases_pipeline_xx_5.5.0_3.0_1725905732846.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_multilingual_cased_finetuned_cases_pipeline", lang = "xx") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_multilingual_cased_finetuned_cases_pipeline", lang = "xx") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_multilingual_cased_finetuned_cases_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|xx| +|Size:|505.4 MB| + +## References + +https://huggingface.co/MSG2002/distilbert-base-multilingual-cased-finetuned-cases + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_multilingual_cased_finetuned_cases_xx.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_multilingual_cased_finetuned_cases_xx.md new file mode 100644 index 00000000000000..1aa3bc9d299e1e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_multilingual_cased_finetuned_cases_xx.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Multilingual distilbert_base_multilingual_cased_finetuned_cases DistilBertEmbeddings from MSG2002 +author: John Snow Labs +name: distilbert_base_multilingual_cased_finetuned_cases +date: 2024-09-09 +tags: [xx, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_multilingual_cased_finetuned_cases` is a Multilingual model originally trained by MSG2002. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_multilingual_cased_finetuned_cases_xx_5.5.0_3.0_1725905708576.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_multilingual_cased_finetuned_cases_xx_5.5.0_3.0_1725905708576.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_multilingual_cased_finetuned_cases","xx") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_multilingual_cased_finetuned_cases","xx") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_multilingual_cased_finetuned_cases| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|xx| +|Size:|505.4 MB| + +## References + +https://huggingface.co/MSG2002/distilbert-base-multilingual-cased-finetuned-cases \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_customer_data_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_customer_data_pipeline_en.md new file mode 100644 index 00000000000000..db1a7ca2261c3a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_customer_data_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_customer_data_pipeline pipeline RoBertaForSequenceClassification from Bharatmalik1999 +author: John Snow Labs +name: distilbert_base_uncased_customer_data_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_customer_data_pipeline` is a English model originally trained by Bharatmalik1999. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_customer_data_pipeline_en_5.5.0_3.0_1725911791736.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_customer_data_pipeline_en_5.5.0_3.0_1725911791736.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_customer_data_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_customer_data_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_customer_data_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.3 MB| + +## References + +https://huggingface.co/Bharatmalik1999/distilbert-base-uncased-customer-data + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_distilbert_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_distilbert_pipeline_en.md new file mode 100644 index 00000000000000..3eb986ed742a6e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_distilbert_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_distilbert_pipeline pipeline DistilBertEmbeddings from distilbert +author: John Snow Labs +name: distilbert_base_uncased_distilbert_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_distilbert_pipeline` is a English model originally trained by distilbert. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_distilbert_pipeline_en_5.5.0_3.0_1725905512145.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_distilbert_pipeline_en_5.5.0_3.0_1725905512145.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_distilbert_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_distilbert_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_distilbert_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/distilbert/distilbert-base-uncased + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_ag_news_v3_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_ag_news_v3_en.md new file mode 100644 index 00000000000000..b0077229fbb304 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_ag_news_v3_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_ag_news_v3 DistilBertEmbeddings from miggwp +author: John Snow Labs +name: distilbert_base_uncased_finetuned_ag_news_v3 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_ag_news_v3` is a English model originally trained by miggwp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ag_news_v3_en_5.5.0_3.0_1725921512605.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_ag_news_v3_en_5.5.0_3.0_1725921512605.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_ag_news_v3","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_ag_news_v3","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
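The token vectors produced by the example above can be unpacked from the annotation structs for a quick sanity check. The sketch below assumes the standard Spark NLP word-embeddings annotation schema, where `result` holds the token text and `embeddings` holds the vector.

```python
# Continues from the Python example above (pipelineDF already computed).
pipelineDF.selectExpr("explode(embeddings) as ann") \
    .selectExpr("ann.result as token", "size(ann.embeddings) as dim") \
    .show(truncate=False)
```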
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_ag_news_v3| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/miggwp/distilbert-base-uncased-finetuned-ag-news-v3 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_clickbait_detection_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_clickbait_detection_pipeline_en.md new file mode 100644 index 00000000000000..e5727215a58aab --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_clickbait_detection_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_clickbait_detection_pipeline pipeline DistilBertForQuestionAnswering from abdulmanaam +author: John Snow Labs +name: distilbert_base_uncased_finetuned_clickbait_detection_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_clickbait_detection_pipeline` is a English model originally trained by abdulmanaam. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_clickbait_detection_pipeline_en_5.5.0_3.0_1725877114312.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_clickbait_detection_pipeline_en_5.5.0_3.0_1725877114312.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_clickbait_detection_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_clickbait_detection_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
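Because this pipeline bundles a MultiDocumentAssembler and a question-answering model, `df` needs two input columns rather than one. The sketch below uses hypothetical column names and strings; the actual names must match the inputCols the bundled MultiDocumentAssembler was saved with.

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_clickbait_detection_pipeline", lang="en")

# Hypothetical question/context pair; rename the columns to match the pipeline's inputCols.
df = spark.createDataFrame(
    [["Is this headline clickbait?", "You won't believe what happened next."]]
).toDF("question", "context")

annotations = pipeline.transform(df)
annotations.printSchema()
```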
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_clickbait_detection_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/abdulmanaam/distilbert-base-uncased-finetuned-clickbait-detection + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_accelerate_elvijs_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_accelerate_elvijs_pipeline_en.md new file mode 100644 index 00000000000000..2b1872c1e76be8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_accelerate_elvijs_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_accelerate_elvijs_pipeline pipeline DistilBertEmbeddings from Elvijs +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_accelerate_elvijs_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_accelerate_elvijs_pipeline` is a English model originally trained by Elvijs. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_elvijs_pipeline_en_5.5.0_3.0_1725909430492.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_elvijs_pipeline_en_5.5.0_3.0_1725909430492.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_accelerate_elvijs_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_accelerate_elvijs_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_accelerate_elvijs_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Elvijs/distilbert-base-uncased-finetuned-imdb-accelerate + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_accelerate_rlenzen_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_accelerate_rlenzen_pipeline_en.md new file mode 100644 index 00000000000000..cc52db1f5f93c4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_accelerate_rlenzen_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_accelerate_rlenzen_pipeline pipeline DistilBertEmbeddings from rlenzen +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_accelerate_rlenzen_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_accelerate_rlenzen_pipeline` is a English model originally trained by rlenzen. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_rlenzen_pipeline_en_5.5.0_3.0_1725905511556.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_rlenzen_pipeline_en_5.5.0_3.0_1725905511556.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_accelerate_rlenzen_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_accelerate_rlenzen_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_accelerate_rlenzen_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/rlenzen/distilbert-base-uncased-finetuned-imdb-accelerate + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_accelerate_vaibhavtalekar87_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_accelerate_vaibhavtalekar87_en.md new file mode 100644 index 00000000000000..dfce332c0ed3c4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_accelerate_vaibhavtalekar87_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_accelerate_vaibhavtalekar87 DistilBertEmbeddings from vaibhavtalekar87 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_accelerate_vaibhavtalekar87 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_accelerate_vaibhavtalekar87` is a English model originally trained by vaibhavtalekar87. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_vaibhavtalekar87_en_5.5.0_3.0_1725909045936.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_vaibhavtalekar87_en_5.5.0_3.0_1725909045936.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_accelerate_vaibhavtalekar87","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_accelerate_vaibhavtalekar87","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
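For quick experiments on a handful of strings, the PipelineModel fitted in the example above can also be wrapped in a LightPipeline, which annotates plain Python strings on the driver. This is a sketch of that pattern, not part of the original model card.

```python
from sparknlp.base import LightPipeline

# Wrap the PipelineModel fitted in the example above.
light = LightPipeline(pipelineModel)

# fullAnnotate returns one dict per input string, keyed by output column name.
result = light.fullAnnotate("I love spark-nlp")
print(result[0].keys())
```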
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_accelerate_vaibhavtalekar87| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/vaibhavtalekar87/distilbert-base-uncased-finetuned-imdb-accelerate \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_accelerate_yangwhale_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_accelerate_yangwhale_pipeline_en.md new file mode 100644 index 00000000000000..9a9b656c2fb7c0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_accelerate_yangwhale_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_accelerate_yangwhale_pipeline pipeline DistilBertEmbeddings from yangwhale +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_accelerate_yangwhale_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_accelerate_yangwhale_pipeline` is a English model originally trained by yangwhale. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_yangwhale_pipeline_en_5.5.0_3.0_1725867880298.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_yangwhale_pipeline_en_5.5.0_3.0_1725867880298.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_accelerate_yangwhale_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_accelerate_yangwhale_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_accelerate_yangwhale_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/yangwhale/distilbert-base-uncased-finetuned-imdb-accelerate + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_ashaduzzaman_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_ashaduzzaman_en.md new file mode 100644 index 00000000000000..8d5d90ca84b578 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_ashaduzzaman_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_ashaduzzaman DistilBertEmbeddings from ashaduzzaman +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_ashaduzzaman +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_ashaduzzaman` is a English model originally trained by ashaduzzaman. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_ashaduzzaman_en_5.5.0_3.0_1725905283367.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_ashaduzzaman_en_5.5.0_3.0_1725905283367.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_ashaduzzaman","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_ashaduzzaman","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
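The `pretrained()` call in the example above downloads the model on first use; persisting the fitted PipelineModel lets later jobs reload it directly instead of rebuilding the stages. The path below is only an example.

```python
from pyspark.ml import PipelineModel

# Save the fitted pipeline from the example above (example path).
pipelineModel.write().overwrite().save("/tmp/distilbert_imdb_ashaduzzaman_pipeline")

# Reload it in a later job and reuse it as-is.
restored = PipelineModel.load("/tmp/distilbert_imdb_ashaduzzaman_pipeline")
restored.transform(data).select("embeddings").show(1, truncate=False)
```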
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_ashaduzzaman| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/ashaduzzaman/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_cheng_cherry_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_cheng_cherry_pipeline_en.md new file mode 100644 index 00000000000000..8bff10b99edb8f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_cheng_cherry_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_cheng_cherry_pipeline pipeline DistilBertEmbeddings from cheng-cherry +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_cheng_cherry_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_cheng_cherry_pipeline` is a English model originally trained by cheng-cherry. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_cheng_cherry_pipeline_en_5.5.0_3.0_1725905612592.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_cheng_cherry_pipeline_en_5.5.0_3.0_1725905612592.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_cheng_cherry_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_cheng_cherry_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_cheng_cherry_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/cheng-cherry/distilbert-base-uncased-finetuned-imdb + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_copypaste_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_copypaste_pipeline_en.md new file mode 100644 index 00000000000000..3dc2f815b3dfc9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_copypaste_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_copypaste_pipeline pipeline DistilBertEmbeddings from CopyPaste +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_copypaste_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_copypaste_pipeline` is a English model originally trained by CopyPaste. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_copypaste_pipeline_en_5.5.0_3.0_1725905437686.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_copypaste_pipeline_en_5.5.0_3.0_1725905437686.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_copypaste_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_copypaste_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_copypaste_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/CopyPaste/distilbert-base-uncased-finetuned-imdb + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_eryuefei_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_eryuefei_en.md new file mode 100644 index 00000000000000..0630a17f9538a5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_eryuefei_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_eryuefei DistilBertEmbeddings from eryuefei +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_eryuefei +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_eryuefei` is a English model originally trained by eryuefei. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_eryuefei_en_5.5.0_3.0_1725921722986.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_eryuefei_en_5.5.0_3.0_1725921722986.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_eryuefei","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_eryuefei","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_eryuefei| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/eryuefei/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_hachiiiii_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_hachiiiii_en.md new file mode 100644 index 00000000000000..00608284d5527d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_hachiiiii_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_hachiiiii DistilBertEmbeddings from hachiiiii +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_hachiiiii +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_hachiiiii` is a English model originally trained by hachiiiii. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_hachiiiii_en_5.5.0_3.0_1725921699853.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_hachiiiii_en_5.5.0_3.0_1725921699853.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_hachiiiii","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_hachiiiii","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
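If the vectors need to feed a downstream Spark ML stage, an EmbeddingsFinisher can be appended to the pipeline from the example above to expose them as plain vector columns. This extra stage is a sketch and is not shipped with the model.

```python
from sparknlp.base import EmbeddingsFinisher

# Convert the "embeddings" annotations into Spark ML vectors (one per token).
finisher = EmbeddingsFinisher() \
    .setInputCols(["embeddings"]) \
    .setOutputCols(["finished_embeddings"]) \
    .setOutputAsVector(True)

pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings, finisher])
pipelineDF = pipeline.fit(data).transform(data)
pipelineDF.select("finished_embeddings").show(1, truncate=False)
```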
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_hachiiiii| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/hachiiiii/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_hardikcode_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_hardikcode_en.md new file mode 100644 index 00000000000000..315c34cce68f13 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_hardikcode_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_hardikcode DistilBertEmbeddings from hardikcode +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_hardikcode +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_hardikcode` is a English model originally trained by hardikcode. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_hardikcode_en_5.5.0_3.0_1725921235007.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_hardikcode_en_5.5.0_3.0_1725921235007.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_hardikcode","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_hardikcode","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_hardikcode| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/hardikcode/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_kayabaakihiko_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_kayabaakihiko_pipeline_en.md new file mode 100644 index 00000000000000..8e4a2a6b9a97c5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_kayabaakihiko_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_kayabaakihiko_pipeline pipeline DistilBertEmbeddings from kayabaAkihiko +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_kayabaakihiko_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_kayabaakihiko_pipeline` is a English model originally trained by kayabaAkihiko. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_kayabaakihiko_pipeline_en_5.5.0_3.0_1725909462988.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_kayabaakihiko_pipeline_en_5.5.0_3.0_1725909462988.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_kayabaakihiko_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_kayabaakihiko_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
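To confirm what the downloaded pipeline contains (it should line up with the Included Models list below), the wrapped PipelineModel can be inspected. The `model` attribute used here reflects the Spark NLP Python API at the time of writing and is worth verifying against your installed version.

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_kayabaakihiko_pipeline", lang="en")

# List the stages bundled in the underlying PipelineModel.
for stage in pipeline.model.stages:
    print(type(stage).__name__)
```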
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_kayabaakihiko_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/kayabaAkihiko/distilbert-base-uncased-finetuned-imdb + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_kumarme072_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_kumarme072_en.md new file mode 100644 index 00000000000000..07aa8a22f1bd93 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_kumarme072_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_kumarme072 DistilBertEmbeddings from kumarme072 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_kumarme072 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_kumarme072` is a English model originally trained by kumarme072. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_kumarme072_en_5.5.0_3.0_1725905283861.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_kumarme072_en_5.5.0_3.0_1725905283861.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_kumarme072","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_kumarme072","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_kumarme072| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/kumarme072/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_loganathanspr_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_loganathanspr_en.md new file mode 100644 index 00000000000000..66a22b18dcbf64 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_loganathanspr_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_loganathanspr DistilBertEmbeddings from loganathanspr +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_loganathanspr +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_loganathanspr` is a English model originally trained by loganathanspr. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_loganathanspr_en_5.5.0_3.0_1725868419922.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_loganathanspr_en_5.5.0_3.0_1725868419922.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_loganathanspr","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_loganathanspr","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_loganathanspr| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/loganathanspr/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_rohitdiwane_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_rohitdiwane_en.md new file mode 100644 index 00000000000000..c6454b933f5bbf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_rohitdiwane_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_rohitdiwane DistilBertEmbeddings from rohitdiwane +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_rohitdiwane +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_rohitdiwane` is a English model originally trained by rohitdiwane. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_rohitdiwane_en_5.5.0_3.0_1725867877261.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_rohitdiwane_en_5.5.0_3.0_1725867877261.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_rohitdiwane","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_rohitdiwane","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_rohitdiwane| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/rohitdiwane/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_trainer_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_trainer_en.md new file mode 100644 index 00000000000000..768383e1a8c360 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_trainer_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_trainer DistilBertEmbeddings from neko52 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_trainer +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_trainer` is a English model originally trained by neko52. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_trainer_en_5.5.0_3.0_1725905745662.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_trainer_en_5.5.0_3.0_1725905745662.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_trainer","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_trainer","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_trainer| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/neko52/distilbert-base-uncased-finetuned-imdb-trainer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_victorbarra_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_victorbarra_en.md new file mode 100644 index 00000000000000..53c4c65a00b597 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_victorbarra_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_victorbarra DistilBertEmbeddings from victorbarra +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_victorbarra +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_victorbarra` is a English model originally trained by victorbarra. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_victorbarra_en_5.5.0_3.0_1725905575827.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_victorbarra_en_5.5.0_3.0_1725905575827.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_victorbarra","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_victorbarra","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_victorbarra| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/victorbarra/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_vonewman_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_vonewman_en.md new file mode 100644 index 00000000000000..0c245de39d94c7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_imdb_vonewman_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_vonewman DistilBertEmbeddings from vonewman +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_vonewman +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_vonewman` is a English model originally trained by vonewman. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_vonewman_en_5.5.0_3.0_1725905867564.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_vonewman_en_5.5.0_3.0_1725905867564.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_vonewman","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_vonewman","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_vonewman| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/vonewman/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_mp_unannotated_half_frozen_v1_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_mp_unannotated_half_frozen_v1_en.md new file mode 100644 index 00000000000000..bba78c0d69c6bd --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_mp_unannotated_half_frozen_v1_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_mp_unannotated_half_frozen_v1 DistilBertEmbeddings from kghanlon +author: John Snow Labs +name: distilbert_base_uncased_finetuned_mp_unannotated_half_frozen_v1 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_mp_unannotated_half_frozen_v1` is a English model originally trained by kghanlon. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_mp_unannotated_half_frozen_v1_en_5.5.0_3.0_1725921234928.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_mp_unannotated_half_frozen_v1_en_5.5.0_3.0_1725921234928.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_mp_unannotated_half_frozen_v1","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_mp_unannotated_half_frozen_v1","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_mp_unannotated_half_frozen_v1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/kghanlon/distilbert-base-uncased-finetuned-MP-unannotated-half-frozen-v1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_pubmed_torch_trained_tabbas97_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_pubmed_torch_trained_tabbas97_en.md new file mode 100644 index 00000000000000..10990c7e110e3d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_pubmed_torch_trained_tabbas97_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_pubmed_torch_trained_tabbas97 DistilBertEmbeddings from tabbas97 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_pubmed_torch_trained_tabbas97 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_pubmed_torch_trained_tabbas97` is a English model originally trained by tabbas97. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_pubmed_torch_trained_tabbas97_en_5.5.0_3.0_1725905283930.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_pubmed_torch_trained_tabbas97_en_5.5.0_3.0_1725905283930.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_pubmed_torch_trained_tabbas97","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_pubmed_torch_trained_tabbas97","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_pubmed_torch_trained_tabbas97| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/tabbas97/distilbert-base-uncased-finetuned-pubmed-torch-trained-tabbas97 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_spam_ham_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_spam_ham_pipeline_en.md new file mode 100644 index 00000000000000..6399ca0dade145 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_spam_ham_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_spam_ham_pipeline pipeline DistilBertEmbeddings from fbi0826 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_spam_ham_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_spam_ham_pipeline` is a English model originally trained by fbi0826. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_spam_ham_pipeline_en_5.5.0_3.0_1725921307921.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_spam_ham_pipeline_en_5.5.0_3.0_1725921307921.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_spam_ham_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_spam_ham_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
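+In the snippet above, `df` is any Spark DataFrame holding the input text; `PretrainedPipeline` also exposes `annotate` for quick checks on plain strings. A minimal sketch (the example sentence is illustrative and the pipeline's DocumentAssembler is assumed to read from a column named `text`; neither is stated in the original card):
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_spam_ham_pipeline", lang="en")
+
+# DataFrame route: provide the input text in a column named "text".
+df = spark.createDataFrame([["Congratulations, you have won a free prize!"]]).toDF("text")
+annotations = pipeline.transform(df)
+annotations.printSchema()
+
+# LightPipeline route for a single string.
+result = pipeline.annotate("Congratulations, you have won a free prize!")
+```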
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_spam_ham_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/fbi0826/distilbert-base-uncased-finetuned-spam-ham + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_squad2_ep4_batch16_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_squad2_ep4_batch16_pipeline_en.md new file mode 100644 index 00000000000000..9636d587ca51c4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_squad2_ep4_batch16_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad2_ep4_batch16_pipeline pipeline DistilBertForQuestionAnswering from wieheistdu +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad2_ep4_batch16_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad2_ep4_batch16_pipeline` is a English model originally trained by wieheistdu. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad2_ep4_batch16_pipeline_en_5.5.0_3.0_1725892540470.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad2_ep4_batch16_pipeline_en_5.5.0_3.0_1725892540470.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_squad2_ep4_batch16_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_squad2_ep4_batch16_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad2_ep4_batch16_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/wieheistdu/distilbert-base-uncased-finetuned-squad2-ep4-batch16 + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_squad_mtcs34_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_squad_mtcs34_en.md new file mode 100644 index 00000000000000..1bce337a194d64 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_squad_mtcs34_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_mtcs34 DistilBertForQuestionAnswering from MTCS34 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_mtcs34 +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_mtcs34` is a English model originally trained by MTCS34. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_mtcs34_en_5.5.0_3.0_1725892496253.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_mtcs34_en_5.5.0_3.0_1725892496253.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_mtcs34","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_mtcs34", "en")
+  .setInputCols(Array("document_question", "document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
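+Once the pipeline has run, the predicted span for each row is available in the `answer` column. A short sketch of reading it back (column names as in the example above; this is an illustration, not part of the original card):
+
+```python
+# `answer` holds one annotation per row; its `result` field is the answer text.
+pipelineDF.selectExpr(
+    "document_question.result as question",
+    "answer.result as answer"
+).show(truncate=False)
+```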
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_mtcs34| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/MTCS34/distilbert-base-uncased-finetuned-squad \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_squad_narayana2222_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_squad_narayana2222_en.md new file mode 100644 index 00000000000000..71a9fa93c3c439 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_squad_narayana2222_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_narayana2222 DistilBertForQuestionAnswering from Narayana2222 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_narayana2222 +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_narayana2222` is a English model originally trained by Narayana2222. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_narayana2222_en_5.5.0_3.0_1725877342543.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_narayana2222_en_5.5.0_3.0_1725877342543.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_narayana2222","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_narayana2222", "en")
+  .setInputCols(Array("document_question", "document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_narayana2222| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Narayana2222/distilbert-base-uncased-finetuned-squad \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_squad_ngchuchi_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_squad_ngchuchi_en.md new file mode 100644 index 00000000000000..37c45ff74a5914 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_squad_ngchuchi_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_ngchuchi DistilBertForQuestionAnswering from ngchuchi +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_ngchuchi +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_ngchuchi` is a English model originally trained by ngchuchi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_ngchuchi_en_5.5.0_3.0_1725892793555.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_ngchuchi_en_5.5.0_3.0_1725892793555.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_ngchuchi","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_ngchuchi", "en")
+  .setInputCols(Array("document_question", "document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_ngchuchi| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/ngchuchi/distilbert-base-uncased-finetuned-squad \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_squad_shahwali_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_squad_shahwali_pipeline_en.md new file mode 100644 index 00000000000000..f3019802fca3a4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_finetuned_squad_shahwali_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_shahwali_pipeline pipeline DistilBertForQuestionAnswering from ShahWali +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_shahwali_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_shahwali_pipeline` is a English model originally trained by ShahWali. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_shahwali_pipeline_en_5.5.0_3.0_1725892727034.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_shahwali_pipeline_en_5.5.0_3.0_1725892727034.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_squad_shahwali_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_squad_shahwali_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_shahwali_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/ShahWali/distilbert-base-uncased-finetuned-squad + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_pytorch_wnut_17_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_pytorch_wnut_17_en.md new file mode 100644 index 00000000000000..fff93ddbbe2d86 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_base_uncased_pytorch_wnut_17_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_pytorch_wnut_17 DistilBertForTokenClassification from adarsh2350 +author: John Snow Labs +name: distilbert_base_uncased_pytorch_wnut_17 +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_pytorch_wnut_17` is a English model originally trained by adarsh2350. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_pytorch_wnut_17_en_5.5.0_3.0_1725890064772.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_pytorch_wnut_17_en_5.5.0_3.0_1725890064772.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_pytorch_wnut_17","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols("document")
+  .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("distilbert_base_uncased_pytorch_wnut_17", "en")
+  .setInputCols(Array("document", "token"))
+  .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
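+The `ner` column then carries one predicted tag per token. A small sketch for pairing each token with its tag (assuming the column names from the example; not part of the original card):
+
+```python
+from pyspark.sql import functions as F
+
+# token.result and ner.result are aligned arrays: one entry per token.
+pairs = pipelineDF.select(
+    F.col("token.result").alias("tokens"),
+    F.col("ner.result").alias("tags")
+).select(F.explode(F.arrays_zip("tokens", "tags")).alias("pair")) \
+ .select("pair.tokens", "pair.tags")
+
+pairs.show(truncate=False)
+```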
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_pytorch_wnut_17| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/adarsh2350/distilbert-base-uncased-pytorch-wnut-17 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_esg_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_esg_en.md new file mode 100644 index 00000000000000..c5ba56cef63c93 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_esg_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_esg DistilBertForSequenceClassification from descartes100 +author: John Snow Labs +name: distilbert_esg +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_esg` is a English model originally trained by descartes100. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_esg_en_5.5.0_3.0_1725873070281.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_esg_en_5.5.0_3.0_1725873070281.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_esg","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_esg", "en")
+  .setInputCols(Array("document", "token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
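+The predicted label ends up in the `class` column as a single annotation per row. A minimal sketch of reading the predictions back (illustrative, not part of the original card):
+
+```python
+from pyspark.sql import functions as F
+
+# `class.result` is an array holding the predicted label; the annotation's
+# metadata typically carries additional details such as confidence scores.
+pipelineDF.select(F.col("text"), F.col("class.result").alias("prediction")) \
+    .show(truncate=False)
+```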
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_esg| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/descartes100/distilBERT_ESG \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_extractive_qa_large_project_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_extractive_qa_large_project_en.md new file mode 100644 index 00000000000000..e253dd58f49ea1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_extractive_qa_large_project_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English distilbert_extractive_qa_large_project BertForQuestionAnswering from amara16 +author: John Snow Labs +name: distilbert_extractive_qa_large_project +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, bert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_extractive_qa_large_project` is a English model originally trained by amara16. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_extractive_qa_large_project_en_5.5.0_3.0_1725885956667.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_extractive_qa_large_project_en_5.5.0_3.0_1725885956667.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = BertForQuestionAnswering.pretrained("distilbert_extractive_qa_large_project","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = BertForQuestionAnswering.pretrained("distilbert_extractive_qa_large_project", "en")
+  .setInputCols(Array("document_question", "document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_extractive_qa_large_project| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|1.2 GB| + +## References + +https://huggingface.co/amara16/distilbert-extractive-qa-large-project \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_extractive_qa_large_project_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_extractive_qa_large_project_pipeline_en.md new file mode 100644 index 00000000000000..e27f130e1e857b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_extractive_qa_large_project_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbert_extractive_qa_large_project_pipeline pipeline BertForQuestionAnswering from amara16 +author: John Snow Labs +name: distilbert_extractive_qa_large_project_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_extractive_qa_large_project_pipeline` is a English model originally trained by amara16. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_extractive_qa_large_project_pipeline_en_5.5.0_3.0_1725886014904.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_extractive_qa_large_project_pipeline_en_5.5.0_3.0_1725886014904.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_extractive_qa_large_project_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_extractive_qa_large_project_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_extractive_qa_large_project_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.2 GB| + +## References + +https://huggingface.co/amara16/distilbert-extractive-qa-large-project + +## Included Models + +- MultiDocumentAssembler +- BertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_finetuned_squadv2_leeduc123_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_finetuned_squadv2_leeduc123_en.md new file mode 100644 index 00000000000000..391c2adae58949 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_finetuned_squadv2_leeduc123_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English distilbert_finetuned_squadv2_leeduc123 DistilBertForQuestionAnswering from leeduc123 +author: John Snow Labs +name: distilbert_finetuned_squadv2_leeduc123 +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_finetuned_squadv2_leeduc123` is a English model originally trained by leeduc123. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_finetuned_squadv2_leeduc123_en_5.5.0_3.0_1725892793736.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_finetuned_squadv2_leeduc123_en_5.5.0_3.0_1725892793736.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_finetuned_squadv2_leeduc123","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_finetuned_squadv2_leeduc123", "en")
+  .setInputCols(Array("document_question", "document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_finetuned_squadv2_leeduc123| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/leeduc123/distilbert-finetuned-squadv2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_static_malware_detection_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_static_malware_detection_en.md new file mode 100644 index 00000000000000..6eb4c89a9d483f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_static_malware_detection_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_static_malware_detection DistilBertForSequenceClassification from sibumi +author: John Snow Labs +name: distilbert_static_malware_detection +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_static_malware_detection` is a English model originally trained by sibumi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_static_malware_detection_en_5.5.0_3.0_1725872961643.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_static_malware_detection_en_5.5.0_3.0_1725872961643.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_static_malware_detection","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_static_malware_detection", "en")
+  .setInputCols(Array("document", "token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_static_malware_detection| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/sibumi/DISTILBERT_static_malware-detection \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilbert_tokenizer_256k_mlm_1m_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilbert_tokenizer_256k_mlm_1m_pipeline_en.md new file mode 100644 index 00000000000000..7e8047429a865d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilbert_tokenizer_256k_mlm_1m_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_tokenizer_256k_mlm_1m_pipeline pipeline DistilBertEmbeddings from vocab-transformers +author: John Snow Labs +name: distilbert_tokenizer_256k_mlm_1m_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_tokenizer_256k_mlm_1m_pipeline` is a English model originally trained by vocab-transformers. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_tokenizer_256k_mlm_1m_pipeline_en_5.5.0_3.0_1725909323286.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_tokenizer_256k_mlm_1m_pipeline_en_5.5.0_3.0_1725909323286.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_tokenizer_256k_mlm_1m_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_tokenizer_256k_mlm_1m_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_tokenizer_256k_mlm_1m_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|882.5 MB| + +## References + +https://huggingface.co/vocab-transformers/distilbert-tokenizer_256k-MLM_1M + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distillbert_base_spanish_uncased_finetuned_with_llama2_knowledge_distillation_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distillbert_base_spanish_uncased_finetuned_with_llama2_knowledge_distillation_pipeline_en.md new file mode 100644 index 00000000000000..1680a839182db3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distillbert_base_spanish_uncased_finetuned_with_llama2_knowledge_distillation_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distillbert_base_spanish_uncased_finetuned_with_llama2_knowledge_distillation_pipeline pipeline DistilBertEmbeddings from tatakof +author: John Snow Labs +name: distillbert_base_spanish_uncased_finetuned_with_llama2_knowledge_distillation_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distillbert_base_spanish_uncased_finetuned_with_llama2_knowledge_distillation_pipeline` is a English model originally trained by tatakof. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distillbert_base_spanish_uncased_finetuned_with_llama2_knowledge_distillation_pipeline_en_5.5.0_3.0_1725921337599.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distillbert_base_spanish_uncased_finetuned_with_llama2_knowledge_distillation_pipeline_en_5.5.0_3.0_1725921337599.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distillbert_base_spanish_uncased_finetuned_with_llama2_knowledge_distillation_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distillbert_base_spanish_uncased_finetuned_with_llama2_knowledge_distillation_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distillbert_base_spanish_uncased_finetuned_with_llama2_knowledge_distillation_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|250.2 MB| + +## References + +https://huggingface.co/tatakof/distillbert-base-spanish-uncased_finetuned_with-Llama2-Knowledge-Distillation + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilledbert_hatespeech_pretrain_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilledbert_hatespeech_pretrain_en.md new file mode 100644 index 00000000000000..6178a78304af0b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilledbert_hatespeech_pretrain_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilledbert_hatespeech_pretrain DistilBertEmbeddings from agvidit1 +author: John Snow Labs +name: distilledbert_hatespeech_pretrain +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilledbert_hatespeech_pretrain` is a English model originally trained by agvidit1. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilledbert_hatespeech_pretrain_en_5.5.0_3.0_1725905704338.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilledbert_hatespeech_pretrain_en_5.5.0_3.0_1725905704338.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilledbert_hatespeech_pretrain","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilledbert_hatespeech_pretrain","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilledbert_hatespeech_pretrain| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/agvidit1/DistilledBert_HateSpeech_pretrain \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilroberta_base_massive_intent_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilroberta_base_massive_intent_pipeline_en.md new file mode 100644 index 00000000000000..94887598e97632 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilroberta_base_massive_intent_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilroberta_base_massive_intent_pipeline pipeline RoBertaForSequenceClassification from gokuls +author: John Snow Labs +name: distilroberta_base_massive_intent_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilroberta_base_massive_intent_pipeline` is a English model originally trained by gokuls. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilroberta_base_massive_intent_pipeline_en_5.5.0_3.0_1725920311664.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilroberta_base_massive_intent_pipeline_en_5.5.0_3.0_1725920311664.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilroberta_base_massive_intent_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilroberta_base_massive_intent_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilroberta_base_massive_intent_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|308.7 MB| + +## References + +https://huggingface.co/gokuls/distilroberta-base-Massive-intent + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilroberta_base_mrpc_glue_oscar_salas4_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilroberta_base_mrpc_glue_oscar_salas4_en.md new file mode 100644 index 00000000000000..7a8708af507ca5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilroberta_base_mrpc_glue_oscar_salas4_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilroberta_base_mrpc_glue_oscar_salas4 RoBertaForSequenceClassification from salascorp +author: John Snow Labs +name: distilroberta_base_mrpc_glue_oscar_salas4 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilroberta_base_mrpc_glue_oscar_salas4` is a English model originally trained by salascorp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilroberta_base_mrpc_glue_oscar_salas4_en_5.5.0_3.0_1725911852859.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilroberta_base_mrpc_glue_oscar_salas4_en_5.5.0_3.0_1725911852859.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("distilroberta_base_mrpc_glue_oscar_salas4","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("distilroberta_base_mrpc_glue_oscar_salas4", "en")
+  .setInputCols(Array("document", "token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilroberta_base_mrpc_glue_oscar_salas4| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|308.6 MB| + +## References + +https://huggingface.co/salascorp/distilroberta-base-mrpc-glue-oscar-salas4 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilroberta_base_twitter_16m_aug_oct22_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilroberta_base_twitter_16m_aug_oct22_pipeline_en.md new file mode 100644 index 00000000000000..126ac38aa59727 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilroberta_base_twitter_16m_aug_oct22_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilroberta_base_twitter_16m_aug_oct22_pipeline pipeline RoBertaEmbeddings from g8a9 +author: John Snow Labs +name: distilroberta_base_twitter_16m_aug_oct22_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilroberta_base_twitter_16m_aug_oct22_pipeline` is a English model originally trained by g8a9. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilroberta_base_twitter_16m_aug_oct22_pipeline_en_5.5.0_3.0_1725909890646.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilroberta_base_twitter_16m_aug_oct22_pipeline_en_5.5.0_3.0_1725909890646.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilroberta_base_twitter_16m_aug_oct22_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilroberta_base_twitter_16m_aug_oct22_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
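+
+The example above assumes an existing Spark DataFrame `df` with a `text` column, which the pipeline's DocumentAssembler stage expects. A minimal sketch of building one:
+
+```python
+# single-row input DataFrame; the pretrained pipeline reads the "text" column
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+```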
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilroberta_base_twitter_16m_aug_oct22_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|306.6 MB| + +## References + +https://huggingface.co/g8a9/distilroberta-base-twitter-16M_aug-oct22 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilroberta_financial_sentiment_model_5000_samples_fine_tune_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilroberta_financial_sentiment_model_5000_samples_fine_tune_en.md new file mode 100644 index 00000000000000..e959f23d8b16a0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilroberta_financial_sentiment_model_5000_samples_fine_tune_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilroberta_financial_sentiment_model_5000_samples_fine_tune RoBertaForSequenceClassification from kevinwlip +author: John Snow Labs +name: distilroberta_financial_sentiment_model_5000_samples_fine_tune +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilroberta_financial_sentiment_model_5000_samples_fine_tune` is a English model originally trained by kevinwlip. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilroberta_financial_sentiment_model_5000_samples_fine_tune_en_5.5.0_3.0_1725904877027.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilroberta_financial_sentiment_model_5000_samples_fine_tune_en_5.5.0_3.0_1725904877027.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("distilroberta_financial_sentiment_model_5000_samples_fine_tune","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("distilroberta_financial_sentiment_model_5000_samples_fine_tune", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
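+
+Once transformed, the predicted sentiment labels land in the `class` output column and can be inspected directly (a minimal sketch using the variables defined above):
+
+```python
+# "class" is an annotation column; ".result" extracts the predicted label strings
+pipelineDF.select("class.result").show(truncate=False)
+```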
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilroberta_financial_sentiment_model_5000_samples_fine_tune| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|308.7 MB| + +## References + +https://huggingface.co/kevinwlip/distilroberta-financial-sentiment-model-5000-samples-fine-tune \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-distilroberta_financial_sentiment_model_5000_samples_fine_tune_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-distilroberta_financial_sentiment_model_5000_samples_fine_tune_pipeline_en.md new file mode 100644 index 00000000000000..173dfdd018f12d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-distilroberta_financial_sentiment_model_5000_samples_fine_tune_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilroberta_financial_sentiment_model_5000_samples_fine_tune_pipeline pipeline RoBertaForSequenceClassification from kevinwlip +author: John Snow Labs +name: distilroberta_financial_sentiment_model_5000_samples_fine_tune_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilroberta_financial_sentiment_model_5000_samples_fine_tune_pipeline` is a English model originally trained by kevinwlip. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilroberta_financial_sentiment_model_5000_samples_fine_tune_pipeline_en_5.5.0_3.0_1725904893707.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilroberta_financial_sentiment_model_5000_samples_fine_tune_pipeline_en_5.5.0_3.0_1725904893707.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilroberta_financial_sentiment_model_5000_samples_fine_tune_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilroberta_financial_sentiment_model_5000_samples_fine_tune_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
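+
+The example above assumes an existing Spark DataFrame `df` with a `text` column, which the pipeline's DocumentAssembler stage expects. A minimal sketch of building one:
+
+```python
+# single-row input DataFrame; the pretrained pipeline reads the "text" column
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+```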
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilroberta_financial_sentiment_model_5000_samples_fine_tune_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|308.8 MB| + +## References + +https://huggingface.co/kevinwlip/distilroberta-financial-sentiment-model-5000-samples-fine-tune + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dl_a3_q3_results_en.md b/docs/_posts/ahmedlone127/2024-09-09-dl_a3_q3_results_en.md new file mode 100644 index 00000000000000..c2451c2d3724e7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dl_a3_q3_results_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dl_a3_q3_results MarianTransformer from nks18 +author: John Snow Labs +name: dl_a3_q3_results +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dl_a3_q3_results` is a English model originally trained by nks18. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dl_a3_q3_results_en_5.5.0_3.0_1725864013470.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dl_a3_q3_results_en_5.5.0_3.0_1725864013470.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("dl_a3_q3_results","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("dl_a3_q3_results","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
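+
+Once transformed, the translated text lands in the `translation` output column and can be inspected directly (a minimal sketch using the variables defined above):
+
+```python
+# "translation" is an annotation column; ".result" extracts the translated strings
+pipelineDF.select("translation.result").show(truncate=False)
+```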
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dl_a3_q3_results| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.4 MB| + +## References + +https://huggingface.co/nks18/dl_a3_q3_results \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dock_1_en.md b/docs/_posts/ahmedlone127/2024-09-09-dock_1_en.md new file mode 100644 index 00000000000000..ee0548c93e1cab --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dock_1_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dock_1 RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: dock_1 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dock_1` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dock_1_en_5.5.0_3.0_1725903357949.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dock_1_en_5.5.0_3.0_1725903357949.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("dock_1","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("dock_1", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
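+
+Once transformed, the predicted labels land in the `class` output column and can be inspected directly (a minimal sketch using the variables defined above):
+
+```python
+# "class" is an annotation column; ".result" extracts the predicted label strings
+pipelineDF.select("class.result").show(truncate=False)
+```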
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dock_1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Dock_1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dock_1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-dock_1_pipeline_en.md new file mode 100644 index 00000000000000..671615fc714a37 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dock_1_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dock_1_pipeline pipeline RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: dock_1_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dock_1_pipeline` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dock_1_pipeline_en_5.5.0_3.0_1725903387688.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dock_1_pipeline_en_5.5.0_3.0_1725903387688.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dock_1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dock_1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
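+
+The example above assumes an existing Spark DataFrame `df` with a `text` column, which the pipeline's DocumentAssembler stage expects. A minimal sketch of building one:
+
+```python
+# single-row input DataFrame; the pretrained pipeline reads the "text" column
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+```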
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dock_1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Dock_1 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dopamin_java_ownership_en.md b/docs/_posts/ahmedlone127/2024-09-09-dopamin_java_ownership_en.md new file mode 100644 index 00000000000000..38d514eab8efb2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dopamin_java_ownership_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dopamin_java_ownership RoBertaForSequenceClassification from Fsoft-AIC +author: John Snow Labs +name: dopamin_java_ownership +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dopamin_java_ownership` is a English model originally trained by Fsoft-AIC. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dopamin_java_ownership_en_5.5.0_3.0_1725911291867.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dopamin_java_ownership_en_5.5.0_3.0_1725911291867.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("dopamin_java_ownership","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("dopamin_java_ownership", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
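+
+Once transformed, the predicted labels land in the `class` output column and can be inspected directly (a minimal sketch using the variables defined above):
+
+```python
+# "class" is an annotation column; ".result" extracts the predicted label strings
+pipelineDF.select("class.result").show(truncate=False)
+```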
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dopamin_java_ownership| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.3 MB| + +## References + +https://huggingface.co/Fsoft-AIC/dopamin-java-ownership \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_joelpastre_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_joelpastre_en.md new file mode 100644 index 00000000000000..e45cf90321f9c8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_joelpastre_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_joelpastre CamemBertEmbeddings from joelpastre +author: John Snow Labs +name: dummy_joelpastre +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_joelpastre` is a English model originally trained by joelpastre. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_joelpastre_en_5.5.0_3.0_1725897889261.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_joelpastre_en_5.5.0_3.0_1725897889261.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_joelpastre","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_joelpastre","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_joelpastre| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/joelpastre/dummy \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_joelpastre_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_joelpastre_pipeline_en.md new file mode 100644 index 00000000000000..d22d5784f3c64d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_joelpastre_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_joelpastre_pipeline pipeline CamemBertEmbeddings from joelpastre +author: John Snow Labs +name: dummy_joelpastre_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_joelpastre_pipeline` is a English model originally trained by joelpastre. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_joelpastre_pipeline_en_5.5.0_3.0_1725897965540.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_joelpastre_pipeline_en_5.5.0_3.0_1725897965540.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_joelpastre_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_joelpastre_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
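+
+The example above assumes an existing Spark DataFrame `df` with a `text` column, which the pipeline's DocumentAssembler stage expects. A minimal sketch of building one:
+
+```python
+# single-row input DataFrame; the pretrained pipeline reads the "text" column
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+```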
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_joelpastre_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/joelpastre/dummy + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model2_appletreeleaf_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model2_appletreeleaf_en.md new file mode 100644 index 00000000000000..b706153de82658 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model2_appletreeleaf_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model2_appletreeleaf CamemBertEmbeddings from appletreeleaf +author: John Snow Labs +name: dummy_model2_appletreeleaf +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model2_appletreeleaf` is a English model originally trained by appletreeleaf. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model2_appletreeleaf_en_5.5.0_3.0_1725898156637.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model2_appletreeleaf_en_5.5.0_3.0_1725898156637.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model2_appletreeleaf","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model2_appletreeleaf","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model2_appletreeleaf| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/appletreeleaf/dummy-model2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_aiacademy131_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_aiacademy131_pipeline_en.md new file mode 100644 index 00000000000000..3f4e79f4179b8e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_aiacademy131_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_aiacademy131_pipeline pipeline CamemBertEmbeddings from aiacademy131 +author: John Snow Labs +name: dummy_model_aiacademy131_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_aiacademy131_pipeline` is a English model originally trained by aiacademy131. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_aiacademy131_pipeline_en_5.5.0_3.0_1725898263044.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_aiacademy131_pipeline_en_5.5.0_3.0_1725898263044.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_aiacademy131_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_aiacademy131_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
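+
+The example above assumes an existing Spark DataFrame `df` with a `text` column, which the pipeline's DocumentAssembler stage expects. A minimal sketch of building one:
+
+```python
+# single-row input DataFrame; the pretrained pipeline reads the "text" column
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+```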
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_aiacademy131_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/aiacademy131/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_greyfoss_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_greyfoss_pipeline_en.md new file mode 100644 index 00000000000000..b454b08974241c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_greyfoss_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_greyfoss_pipeline pipeline CamemBertEmbeddings from greyfoss +author: John Snow Labs +name: dummy_model_greyfoss_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_greyfoss_pipeline` is a English model originally trained by greyfoss. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_greyfoss_pipeline_en_5.5.0_3.0_1725898353584.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_greyfoss_pipeline_en_5.5.0_3.0_1725898353584.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_greyfoss_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_greyfoss_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
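+
+The example above assumes an existing Spark DataFrame `df` with a `text` column, which the pipeline's DocumentAssembler stage expects. A minimal sketch of building one:
+
+```python
+# single-row input DataFrame; the pretrained pipeline reads the "text" column
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+```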
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_greyfoss_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/greyfoss/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_henrywang_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_henrywang_en.md new file mode 100644 index 00000000000000..2c8c1089e3305e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_henrywang_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_henrywang CamemBertEmbeddings from Henrywang +author: John Snow Labs +name: dummy_model_henrywang +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_henrywang` is a English model originally trained by Henrywang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_henrywang_en_5.5.0_3.0_1725898694573.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_henrywang_en_5.5.0_3.0_1725898694573.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_henrywang","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_henrywang","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_henrywang| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/Henrywang/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_henrywang_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_henrywang_pipeline_en.md new file mode 100644 index 00000000000000..09eaa717cf5a58 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_henrywang_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_henrywang_pipeline pipeline CamemBertEmbeddings from Henrywang +author: John Snow Labs +name: dummy_model_henrywang_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_henrywang_pipeline` is a English model originally trained by Henrywang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_henrywang_pipeline_en_5.5.0_3.0_1725898770384.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_henrywang_pipeline_en_5.5.0_3.0_1725898770384.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_henrywang_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_henrywang_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
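+
+The example above assumes an existing Spark DataFrame `df` with a `text` column, which the pipeline's DocumentAssembler stage expects. A minimal sketch of building one:
+
+```python
+# single-row input DataFrame; the pretrained pipeline reads the "text" column
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+```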
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_henrywang_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/Henrywang/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_ishan1423_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_ishan1423_en.md new file mode 100644 index 00000000000000..d527504fb56591 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_ishan1423_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_ishan1423 CamemBertEmbeddings from Ishan1423 +author: John Snow Labs +name: dummy_model_ishan1423 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_ishan1423` is a English model originally trained by Ishan1423. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_ishan1423_en_5.5.0_3.0_1725898399165.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_ishan1423_en_5.5.0_3.0_1725898399165.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_ishan1423","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_ishan1423","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_ishan1423| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/Ishan1423/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_jcr987_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_jcr987_en.md new file mode 100644 index 00000000000000..fe3d59839d162a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_jcr987_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_jcr987 CamemBertEmbeddings from jcr987 +author: John Snow Labs +name: dummy_model_jcr987 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_jcr987` is a English model originally trained by jcr987. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_jcr987_en_5.5.0_3.0_1725897748830.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_jcr987_en_5.5.0_3.0_1725897748830.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_jcr987","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_jcr987","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_jcr987| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/jcr987/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_jdang_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_jdang_en.md new file mode 100644 index 00000000000000..ab488180ac1e64 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_jdang_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_jdang CamemBertEmbeddings from jdang +author: John Snow Labs +name: dummy_model_jdang +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_jdang` is a English model originally trained by jdang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_jdang_en_5.5.0_3.0_1725898595409.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_jdang_en_5.5.0_3.0_1725898595409.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_jdang","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_jdang","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_jdang| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/jdang/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_jdang_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_jdang_pipeline_en.md new file mode 100644 index 00000000000000..f091247060e9f4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_jdang_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_jdang_pipeline pipeline CamemBertEmbeddings from jdang +author: John Snow Labs +name: dummy_model_jdang_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_jdang_pipeline` is a English model originally trained by jdang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_jdang_pipeline_en_5.5.0_3.0_1725898670184.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_jdang_pipeline_en_5.5.0_3.0_1725898670184.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_jdang_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_jdang_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
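+
+The example above assumes an existing Spark DataFrame `df` with a `text` column, which the pipeline's DocumentAssembler stage expects. A minimal sketch of building one:
+
+```python
+# single-row input DataFrame; the pretrained pipeline reads the "text" column
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+```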
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_jdang_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/jdang/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_krishaaan_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_krishaaan_pipeline_en.md new file mode 100644 index 00000000000000..3752a1a905b081 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_krishaaan_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_krishaaan_pipeline pipeline CamemBertEmbeddings from Krishaaan +author: John Snow Labs +name: dummy_model_krishaaan_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_krishaaan_pipeline` is a English model originally trained by Krishaaan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_krishaaan_pipeline_en_5.5.0_3.0_1725898044568.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_krishaaan_pipeline_en_5.5.0_3.0_1725898044568.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_krishaaan_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_krishaaan_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
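+
+The example above assumes an existing Spark DataFrame `df` with a `text` column, which the pipeline's DocumentAssembler stage expects. A minimal sketch of building one:
+
+```python
+# single-row input DataFrame; the pretrained pipeline reads the "text" column
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+```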
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_krishaaan_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/Krishaaan/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_marcsun13_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_marcsun13_pipeline_en.md new file mode 100644 index 00000000000000..40cad60632bfc6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_marcsun13_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_marcsun13_pipeline pipeline CamemBertEmbeddings from marcsun13 +author: John Snow Labs +name: dummy_model_marcsun13_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_marcsun13_pipeline` is a English model originally trained by marcsun13. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_marcsun13_pipeline_en_5.5.0_3.0_1725851294704.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_marcsun13_pipeline_en_5.5.0_3.0_1725851294704.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_marcsun13_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_marcsun13_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
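+
+The example above assumes an existing Spark DataFrame `df` with a `text` column, which the pipeline's DocumentAssembler stage expects. A minimal sketch of building one:
+
+```python
+# single-row input DataFrame; the pretrained pipeline reads the "text" column
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+```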
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_marcsun13_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/marcsun13/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_maunei_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_maunei_en.md new file mode 100644 index 00000000000000..637d5fc69d907f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_maunei_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_maunei CamemBertEmbeddings from maunei +author: John Snow Labs +name: dummy_model_maunei +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_maunei` is a English model originally trained by maunei. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_maunei_en_5.5.0_3.0_1725897817553.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_maunei_en_5.5.0_3.0_1725897817553.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_maunei","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_maunei","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_maunei| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/maunei/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_mayank1999_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_mayank1999_pipeline_en.md new file mode 100644 index 00000000000000..7ccf254df7c71d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_mayank1999_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_mayank1999_pipeline pipeline CamemBertEmbeddings from Mayank1999 +author: John Snow Labs +name: dummy_model_mayank1999_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_mayank1999_pipeline` is a English model originally trained by Mayank1999. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_mayank1999_pipeline_en_5.5.0_3.0_1725898215339.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_mayank1999_pipeline_en_5.5.0_3.0_1725898215339.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_mayank1999_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_mayank1999_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
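+
+The example above assumes an existing Spark DataFrame `df` with a `text` column, which the pipeline's DocumentAssembler stage expects. A minimal sketch of building one:
+
+```python
+# single-row input DataFrame; the pretrained pipeline reads the "text" column
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+```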
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_mayank1999_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/Mayank1999/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_modeltokenizerpushtohub_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_modeltokenizerpushtohub_en.md new file mode 100644 index 00000000000000..f2ee40c34ab94a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_modeltokenizerpushtohub_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_modeltokenizerpushtohub CamemBertEmbeddings from sara98 +author: John Snow Labs +name: dummy_model_modeltokenizerpushtohub +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_modeltokenizerpushtohub` is a English model originally trained by sara98. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_modeltokenizerpushtohub_en_5.5.0_3.0_1725851667730.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_modeltokenizerpushtohub_en_5.5.0_3.0_1725851667730.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_modeltokenizerpushtohub","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_modeltokenizerpushtohub","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
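+
+If the token-level vectors themselves are needed rather than the raw annotation structs, a short follow-up such as the sketch below can be appended to the Python example; it relies only on the `embeddings` output column defined above, and the row limit and truncation settings are arbitrary.
+
+```python
+# Flatten the "embeddings" annotation column into (token, vector) rows.
+from pyspark.sql.functions import explode, col
+
+token_vectors = (
+    pipelineDF
+    .select(explode(col("embeddings")).alias("ann"))
+    .select(col("ann.result").alias("token"),
+            col("ann.embeddings").alias("vector"))
+)
+token_vectors.show(5, truncate=80)
+```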
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_modeltokenizerpushtohub| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/sara98/dummy-model-modeltokenizerpushtohub \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_rarabang_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_rarabang_en.md new file mode 100644 index 00000000000000..2fcb401611d66d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_rarabang_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_rarabang CamemBertEmbeddings from rarabang +author: John Snow Labs +name: dummy_model_rarabang +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_rarabang` is a English model originally trained by rarabang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_rarabang_en_5.5.0_3.0_1725897828296.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_rarabang_en_5.5.0_3.0_1725897828296.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_rarabang","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_rarabang","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_rarabang| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/rarabang/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_rarabang_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_rarabang_pipeline_en.md new file mode 100644 index 00000000000000..1f65e9e5917495 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_rarabang_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_rarabang_pipeline pipeline CamemBertEmbeddings from rarabang +author: John Snow Labs +name: dummy_model_rarabang_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_rarabang_pipeline` is a English model originally trained by rarabang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_rarabang_pipeline_en_5.5.0_3.0_1725897905083.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_rarabang_pipeline_en_5.5.0_3.0_1725897905083.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_rarabang_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_rarabang_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_rarabang_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/rarabang/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_scotssman_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_scotssman_en.md new file mode 100644 index 00000000000000..d7e20c301628d6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_scotssman_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_scotssman CamemBertEmbeddings from scotssman +author: John Snow Labs +name: dummy_model_scotssman +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_scotssman` is a English model originally trained by scotssman. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_scotssman_en_5.5.0_3.0_1725898180186.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_scotssman_en_5.5.0_3.0_1725898180186.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_scotssman","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_scotssman","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_scotssman| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/scotssman/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_seyfullah_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_seyfullah_pipeline_en.md new file mode 100644 index 00000000000000..444ae4fbabd118 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_seyfullah_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_seyfullah_pipeline pipeline CamemBertEmbeddings from seyfullah +author: John Snow Labs +name: dummy_model_seyfullah_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_seyfullah_pipeline` is a English model originally trained by seyfullah. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_seyfullah_pipeline_en_5.5.0_3.0_1725898128177.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_seyfullah_pipeline_en_5.5.0_3.0_1725898128177.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_seyfullah_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_seyfullah_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_seyfullah_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/seyfullah/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_terps_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_terps_en.md new file mode 100644 index 00000000000000..529786bc507f56 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_terps_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_terps CamemBertEmbeddings from Terps +author: John Snow Labs +name: dummy_model_terps +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_terps` is a English model originally trained by Terps. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_terps_en_5.5.0_3.0_1725852016190.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_terps_en_5.5.0_3.0_1725852016190.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_terps","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_terps","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_terps| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/Terps/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_vincedovy_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_vincedovy_en.md new file mode 100644 index 00000000000000..425b0a9e92efc6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_vincedovy_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_vincedovy CamemBertEmbeddings from vincedovy +author: John Snow Labs +name: dummy_model_vincedovy +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_vincedovy` is a English model originally trained by vincedovy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_vincedovy_en_5.5.0_3.0_1725898750585.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_vincedovy_en_5.5.0_3.0_1725898750585.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_vincedovy","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_vincedovy","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_vincedovy| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/vincedovy/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dummy_model_vincedovy_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_vincedovy_pipeline_en.md new file mode 100644 index 00000000000000..d496c981b0fbdf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dummy_model_vincedovy_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_vincedovy_pipeline pipeline CamemBertEmbeddings from vincedovy +author: John Snow Labs +name: dummy_model_vincedovy_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_vincedovy_pipeline` is a English model originally trained by vincedovy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_vincedovy_pipeline_en_5.5.0_3.0_1725898824510.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_vincedovy_pipeline_en_5.5.0_3.0_1725898824510.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_vincedovy_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_vincedovy_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_vincedovy_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/vincedovy/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-dzoqamodel_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-dzoqamodel_pipeline_en.md new file mode 100644 index 00000000000000..082ace0938fc8f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-dzoqamodel_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English dzoqamodel_pipeline pipeline RoBertaForQuestionAnswering from Norphel +author: John Snow Labs +name: dzoqamodel_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dzoqamodel_pipeline` is a English model originally trained by Norphel. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dzoqamodel_pipeline_en_5.5.0_3.0_1725876693463.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dzoqamodel_pipeline_en_5.5.0_3.0_1725876693463.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dzoqamodel_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dzoqamodel_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
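+
+Because this pipeline starts with a MultiDocumentAssembler, it reads two input columns rather than one. The sketch below assumes the usual Spark NLP question-answering column names (`question` and `context`) and an output column named `answer`; both are assumptions to verify against the actual pipeline schema.
+
+```python
+# Extractive QA sketch; assumes an active Spark NLP session (`spark`).
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("dzoqamodel_pipeline", lang="en")
+
+df = spark.createDataFrame([[
+    "What was the model fine-tuned for?",
+    "This RoBERTa model was fine-tuned for extractive question answering."
+]]).toDF("question", "context")
+
+result = pipeline.transform(df)
+result.printSchema()  # confirm the real output column name before selecting it
+result.select("answer.result").show(truncate=False)  # "answer" is an assumed column name
+```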
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dzoqamodel_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|311.2 MB| + +## References + +https://huggingface.co/Norphel/dzoQAmodel + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-electra_classifier_bertic_tweetsentiment_pipeline_xx.md b/docs/_posts/ahmedlone127/2024-09-09-electra_classifier_bertic_tweetsentiment_pipeline_xx.md new file mode 100644 index 00000000000000..f23731ca51edf7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-electra_classifier_bertic_tweetsentiment_pipeline_xx.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Multilingual electra_classifier_bertic_tweetsentiment_pipeline pipeline BertForSequenceClassification from EMBEDDIA +author: John Snow Labs +name: electra_classifier_bertic_tweetsentiment_pipeline +date: 2024-09-09 +tags: [xx, open_source, pipeline, onnx] +task: Text Classification +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`electra_classifier_bertic_tweetsentiment_pipeline` is a Multilingual model originally trained by EMBEDDIA. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/electra_classifier_bertic_tweetsentiment_pipeline_xx_5.5.0_3.0_1725900501079.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/electra_classifier_bertic_tweetsentiment_pipeline_xx_5.5.0_3.0_1725900501079.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("electra_classifier_bertic_tweetsentiment_pipeline", lang = "xx") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("electra_classifier_bertic_tweetsentiment_pipeline", lang = "xx") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|electra_classifier_bertic_tweetsentiment_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|xx| +|Size:|414.9 MB| + +## References + +https://huggingface.co/EMBEDDIA/bertic-tweetsentiment + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-electra_classifier_bertic_tweetsentiment_xx.md b/docs/_posts/ahmedlone127/2024-09-09-electra_classifier_bertic_tweetsentiment_xx.md new file mode 100644 index 00000000000000..e0458890700c4c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-electra_classifier_bertic_tweetsentiment_xx.md @@ -0,0 +1,104 @@ +--- +layout: model +title: Multilingual ElectraForSequenceClassification Cased model (from EMBEDDIA) +author: John Snow Labs +name: electra_classifier_bertic_tweetsentiment +date: 2024-09-09 +tags: [bs, hr, cnr, sr, open_source, electra, sequence_classification, classification, xx, onnx] +task: Text Classification +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained ElectraForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `bertic-tweetsentiment` is a Multilingual model originally trained by `EMBEDDIA`. + +## Predicted Entities + +`Neutral`, `Positive`, `Negative` + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/electra_classifier_bertic_tweetsentiment_xx_5.5.0_3.0_1725900480815.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/electra_classifier_bertic_tweetsentiment_xx_5.5.0_3.0_1725900480815.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+tokenizer = Tokenizer() \
+    .setInputCols("document") \
+    .setOutputCol("token")
+
+seq_classifier = BertForSequenceClassification.pretrained("electra_classifier_bertic_tweetsentiment","xx") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline(stages=[documentAssembler, tokenizer, seq_classifier])
+
+data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")
+
+result = pipeline.fit(data).transform(data)
+```
+```scala
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val seq_classifier = BertForSequenceClassification.pretrained("electra_classifier_bertic_tweetsentiment","xx")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, seq_classifier))
+
+val data = Seq("PUT YOUR STRING HERE").toDS.toDF("text")
+
+val result = pipeline.fit(data).transform(data)
+```
+
+{:.nlu-block}
+```python
+import nlu
+nlu.load("xx.classify.electra.tweet_sentiment.").predict("""PUT YOUR STRING HERE""")
+```
+
+
+{:.model-param}
+## Model Information
+
+{:.table-model}
+|---|---|
+|Model Name:|electra_classifier_bertic_tweetsentiment|
+|Compatibility:|Spark NLP 5.5.0+|
+|License:|Open Source|
+|Edition:|Official|
+|Input Labels:|[document, token]|
+|Output Labels:|[class]|
+|Language:|xx|
+|Size:|414.9 MB|
+
+## References
+
+- https://huggingface.co/EMBEDDIA/bertic-tweetsentiment
\ No newline at end of file
diff --git a/docs/_posts/ahmedlone127/2024-09-09-electra_qa_base_discriminator_finetuned_squad_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-electra_qa_base_discriminator_finetuned_squad_pipeline_en.md
new file mode 100644
index 00000000000000..2f095fb062ad04
--- /dev/null
+++ b/docs/_posts/ahmedlone127/2024-09-09-electra_qa_base_discriminator_finetuned_squad_pipeline_en.md
@@ -0,0 +1,69 @@
+---
+layout: model
+title: English electra_qa_base_discriminator_finetuned_squad_pipeline pipeline BertForQuestionAnswering from usami
+author: John Snow Labs
+name: electra_qa_base_discriminator_finetuned_squad_pipeline
+date: 2024-09-09
+tags: [en, open_source, pipeline, onnx]
+task: Question Answering
+language: en
+edition: Spark NLP 5.5.0
+spark_version: 3.0
+supported: true
+annotator: PipelineModel
+article_header:
+  type: cover
+use_language_switcher: "Python-Scala-Java"
+---
+
+## Description
+
+Pretrained BertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `electra_qa_base_discriminator_finetuned_squad_pipeline` is an English model originally trained by usami.
+
+{:.btn-box}
+
+
+[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/electra_qa_base_discriminator_finetuned_squad_pipeline_en_5.5.0_3.0_1725886273398.zip){:.button.button-orange.button-orange-trans.arr.button-icon}
+[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/electra_qa_base_discriminator_finetuned_squad_pipeline_en_5.5.0_3.0_1725886273398.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}
+
+## How to use
+
+
+
+
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("electra_qa_base_discriminator_finetuned_squad_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("electra_qa_base_discriminator_finetuned_squad_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|electra_qa_base_discriminator_finetuned_squad_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|408.0 MB| + +## References + +https://huggingface.co/usami/electra-base-discriminator-finetuned-squad + +## Included Models + +- MultiDocumentAssembler +- BertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-elicitsbackgroundknowledge_a6000_0_00005_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-elicitsbackgroundknowledge_a6000_0_00005_pipeline_en.md new file mode 100644 index 00000000000000..ac762505afa04b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-elicitsbackgroundknowledge_a6000_0_00005_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English elicitsbackgroundknowledge_a6000_0_00005_pipeline pipeline RoBertaForSequenceClassification from rose-e-wang +author: John Snow Labs +name: elicitsbackgroundknowledge_a6000_0_00005_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`elicitsbackgroundknowledge_a6000_0_00005_pipeline` is a English model originally trained by rose-e-wang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/elicitsbackgroundknowledge_a6000_0_00005_pipeline_en_5.5.0_3.0_1725902354623.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/elicitsbackgroundknowledge_a6000_0_00005_pipeline_en_5.5.0_3.0_1725902354623.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("elicitsbackgroundknowledge_a6000_0_00005_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("elicitsbackgroundknowledge_a6000_0_00005_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|elicitsbackgroundknowledge_a6000_0_00005_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/rose-e-wang/elicitsBackgroundKnowledge_a6000_0.00005 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-ellis_qa_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-ellis_qa_pipeline_en.md new file mode 100644 index 00000000000000..440d91f88bbe8a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-ellis_qa_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English ellis_qa_pipeline pipeline RoBertaForQuestionAnswering from gsl22 +author: John Snow Labs +name: ellis_qa_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ellis_qa_pipeline` is a English model originally trained by gsl22. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ellis_qa_pipeline_en_5.5.0_3.0_1725876518106.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ellis_qa_pipeline_en_5.5.0_3.0_1725876518106.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("ellis_qa_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("ellis_qa_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ellis_qa_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|463.6 MB| + +## References + +https://huggingface.co/gsl22/Ellis-QA + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m_en.md b/docs/_posts/ahmedlone127/2024-09-09-emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m_en.md new file mode 100644 index 00000000000000..bf36aead4bdd79 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m RoBertaForSequenceClassification from tweettemposhift +author: John Snow Labs +name: emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m` is a English model originally trained by tweettemposhift. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m_en_5.5.0_3.0_1725920706587.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m_en_5.5.0_3.0_1725920706587.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
 +{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = RoBertaForSequenceClassification.pretrained("emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m","en") \ + .setInputCols(["document","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = RoBertaForSequenceClassification.pretrained("emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m", "en") + .setInputCols(Array("document","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
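 + +To read the predictions back out of `pipelineDF`, a continuation along the lines of the sketch below is usually enough; it uses only the `class` output column defined in the Python example above, and the per-label scores are taken from the annotation metadata. + +```python +# Predicted label per input row. +from pyspark.sql.functions import explode, col + +pipelineDF.select("text", "class.result").show(truncate=False) + +# Per-label scores kept in the annotation metadata. +pipelineDF.select(explode(col("class")).alias("ann")) \ + .select(col("ann.result").alias("label"), col("ann.metadata").alias("scores")) \ + .show(truncate=False) +``` 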
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.4 MB| + +## References + +https://huggingface.co/tweettemposhift/emoji-emoji_random1_seed2-twitter-roberta-base-2022-154m \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m_pipeline_en.md new file mode 100644 index 00000000000000..97d789c8176fd1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m_pipeline pipeline RoBertaForSequenceClassification from tweettemposhift +author: John Snow Labs +name: emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m_pipeline` is a English model originally trained by tweettemposhift. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m_pipeline_en_5.5.0_3.0_1725920729486.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m_pipeline_en_5.5.0_3.0_1725920729486.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|emoji_emoji_random1_seed2_twitter_roberta_base_2022_154m_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.4 MB| + +## References + +https://huggingface.co/tweettemposhift/emoji-emoji_random1_seed2-twitter-roberta-base-2022-154m + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-emotions_roberta_test_6_en.md b/docs/_posts/ahmedlone127/2024-09-09-emotions_roberta_test_6_en.md new file mode 100644 index 00000000000000..c270e3eef08e01 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-emotions_roberta_test_6_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English emotions_roberta_test_6 RoBertaForSequenceClassification from Zeyu2000 +author: John Snow Labs +name: emotions_roberta_test_6 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`emotions_roberta_test_6` is a English model originally trained by Zeyu2000. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/emotions_roberta_test_6_en_5.5.0_3.0_1725903824026.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/emotions_roberta_test_6_en_5.5.0_3.0_1725903824026.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
 +{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = RoBertaForSequenceClassification.pretrained("emotions_roberta_test_6","en") \ + .setInputCols(["document","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = RoBertaForSequenceClassification.pretrained("emotions_roberta_test_6", "en") + .setInputCols(Array("document","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|emotions_roberta_test_6| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|456.0 MB| + +## References + +https://huggingface.co/Zeyu2000/emotions-roberta-test-6 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-empathy_model_en.md b/docs/_posts/ahmedlone127/2024-09-09-empathy_model_en.md new file mode 100644 index 00000000000000..363faa7f72753e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-empathy_model_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English empathy_model DistilBertForSequenceClassification from vtiyyal1 +author: John Snow Labs +name: empathy_model +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`empathy_model` is a English model originally trained by vtiyyal1. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/empathy_model_en_5.5.0_3.0_1725872957494.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/empathy_model_en_5.5.0_3.0_1725872957494.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
 +{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = DistilBertForSequenceClassification.pretrained("empathy_model","en") \ + .setInputCols(["document","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = DistilBertForSequenceClassification.pretrained("empathy_model", "en") + .setInputCols(Array("document","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|empathy_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/vtiyyal1/empathy_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-empathy_model_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-empathy_model_pipeline_en.md new file mode 100644 index 00000000000000..b4c42342e4310e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-empathy_model_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English empathy_model_pipeline pipeline DistilBertForSequenceClassification from vtiyyal1 +author: John Snow Labs +name: empathy_model_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`empathy_model_pipeline` is a English model originally trained by vtiyyal1. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/empathy_model_pipeline_en_5.5.0_3.0_1725872969456.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/empathy_model_pipeline_en_5.5.0_3.0_1725872969456.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("empathy_model_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("empathy_model_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|empathy_model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/vtiyyal1/empathy_model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-en2zh40_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-en2zh40_pipeline_en.md new file mode 100644 index 00000000000000..d64eae879428f3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-en2zh40_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English en2zh40_pipeline pipeline MarianTransformer from Carlosino +author: John Snow Labs +name: en2zh40_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`en2zh40_pipeline` is a English model originally trained by Carlosino. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/en2zh40_pipeline_en_5.5.0_3.0_1725913264591.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/en2zh40_pipeline_en_5.5.0_3.0_1725913264591.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("en2zh40_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("en2zh40_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
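 + +For a quick interactive check, the pretrained pipeline can also be called on a single string, as in the sketch below; the whole annotation dictionary is printed because the serialized pipeline defines its own output column names. + +```python +# Single-sentence translation check; the input sentence is an arbitrary example. +from sparknlp.pretrained import PretrainedPipeline + +pipeline = PretrainedPipeline("en2zh40_pipeline", lang="en") +print(pipeline.annotate("Spark NLP makes machine translation on Spark clusters straightforward.")) +``` 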
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|en2zh40_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|541.5 MB| + +## References + +https://huggingface.co/Carlosino/en2zh40 + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-english_tonga_tonga_islands_romanian_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-english_tonga_tonga_islands_romanian_pipeline_en.md new file mode 100644 index 00000000000000..8220d98e8aaa64 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-english_tonga_tonga_islands_romanian_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English english_tonga_tonga_islands_romanian_pipeline pipeline MarianTransformer from sanjeev498 +author: John Snow Labs +name: english_tonga_tonga_islands_romanian_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`english_tonga_tonga_islands_romanian_pipeline` is a English model originally trained by sanjeev498. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/english_tonga_tonga_islands_romanian_pipeline_en_5.5.0_3.0_1725864340724.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/english_tonga_tonga_islands_romanian_pipeline_en_5.5.0_3.0_1725864340724.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("english_tonga_tonga_islands_romanian_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("english_tonga_tonga_islands_romanian_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|english_tonga_tonga_islands_romanian_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|509.1 MB| + +## References + +https://huggingface.co/sanjeev498/en-to-romanian + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-eriberta_base_en.md b/docs/_posts/ahmedlone127/2024-09-09-eriberta_base_en.md new file mode 100644 index 00000000000000..9890917268ec76 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-eriberta_base_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English eriberta_base RoBertaEmbeddings from HiTZ +author: John Snow Labs +name: eriberta_base +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`eriberta_base` is a English model originally trained by HiTZ. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/eriberta_base_en_5.5.0_3.0_1725910370944.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/eriberta_base_en_5.5.0_3.0_1725910370944.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("eriberta_base","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("eriberta_base","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
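 + +If the embeddings are meant to feed a downstream Spark MLlib estimator, an EmbeddingsFinisher stage can convert the annotation vectors into Spark ML vectors. The sketch below is an assumed continuation of the Python example above and uses only its `embeddings` output column. + +```python +# Convert annotation embeddings into Spark ML vectors. +from sparknlp.base import EmbeddingsFinisher + +finisher = EmbeddingsFinisher() \ + .setInputCols(["embeddings"]) \ + .setOutputCols(["finished_embeddings"]) \ + .setOutputAsVector(True) + +finished = finisher.transform(pipelineDF) +finished.selectExpr("explode(finished_embeddings) as token_vector").show(5, truncate=80) +``` 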
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|eriberta_base| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|505.5 MB| + +## References + +https://huggingface.co/HiTZ/EriBERTa-base \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-eriberta_base_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-eriberta_base_pipeline_en.md new file mode 100644 index 00000000000000..4b95bde499992f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-eriberta_base_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English eriberta_base_pipeline pipeline RoBertaEmbeddings from HiTZ +author: John Snow Labs +name: eriberta_base_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`eriberta_base_pipeline` is a English model originally trained by HiTZ. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/eriberta_base_pipeline_en_5.5.0_3.0_1725910396509.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/eriberta_base_pipeline_en_5.5.0_3.0_1725910396509.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("eriberta_base_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("eriberta_base_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
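+
+For quick, single-text checks, `PretrainedPipeline` also exposes an `annotate` helper that returns plain Python lists instead of a DataFrame. A small sketch (the example sentence is arbitrary):
+
+```python
+# Returns a dict keyed by the pipeline's output columns (e.g. "document", "token", "embeddings").
+result = pipeline.annotate("EriBERTa is a biomedical language model.")
+print(result.keys())
+```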
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|eriberta_base_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|505.6 MB| + +## References + +https://huggingface.co/HiTZ/EriBERTa-base + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-essai1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-essai1_pipeline_en.md new file mode 100644 index 00000000000000..780efee32f1bc2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-essai1_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English essai1_pipeline pipeline MarianTransformer from Maya +author: John Snow Labs +name: essai1_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`essai1_pipeline` is a English model originally trained by Maya. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/essai1_pipeline_en_5.5.0_3.0_1725891378399.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/essai1_pipeline_en_5.5.0_3.0_1725891378399.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("essai1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("essai1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|essai1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|509.1 MB| + +## References + +https://huggingface.co/Maya/essai1 + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-eth_setfit_payment_model_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-eth_setfit_payment_model_pipeline_en.md new file mode 100644 index 00000000000000..1099c3a164bbc3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-eth_setfit_payment_model_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English eth_setfit_payment_model_pipeline pipeline MPNetEmbeddings from kanixwang +author: John Snow Labs +name: eth_setfit_payment_model_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`eth_setfit_payment_model_pipeline` is a English model originally trained by kanixwang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/eth_setfit_payment_model_pipeline_en_5.5.0_3.0_1725896574476.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/eth_setfit_payment_model_pipeline_en_5.5.0_3.0_1725896574476.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("eth_setfit_payment_model_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("eth_setfit_payment_model_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|eth_setfit_payment_model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|407.0 MB| + +## References + +https://huggingface.co/kanixwang/eth-setfit-payment-model + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-f_80_20_en.md b/docs/_posts/ahmedlone127/2024-09-09-f_80_20_en.md new file mode 100644 index 00000000000000..5c2a81a2a712df --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-f_80_20_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English f_80_20 RoBertaForSequenceClassification from tegaranggana +author: John Snow Labs +name: f_80_20 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`f_80_20` is a English model originally trained by tegaranggana. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/f_80_20_en_5.5.0_3.0_1725903145284.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/f_80_20_en_5.5.0_3.0_1725903145284.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("f_80_20","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("f_80_20", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
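+
+After `transform`, the predicted label for each row sits in the `result` field of the `class` annotation column. A minimal sketch, assuming the pipeline above has been fitted and applied:
+
+```python
+# Show the predicted class label(s) per input row.
+pipelineDF.select("class.result").show(truncate=False)
+```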
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|f_80_20| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|467.7 MB| + +## References + +https://huggingface.co/tegaranggana/f_80_20 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-f_roberta_classifier2_en.md b/docs/_posts/ahmedlone127/2024-09-09-f_roberta_classifier2_en.md new file mode 100644 index 00000000000000..363b68cf6c2b73 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-f_roberta_classifier2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English f_roberta_classifier2 RoBertaForSequenceClassification from James-kc-min +author: John Snow Labs +name: f_roberta_classifier2 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`f_roberta_classifier2` is a English model originally trained by James-kc-min. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/f_roberta_classifier2_en_5.5.0_3.0_1725911304904.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/f_roberta_classifier2_en_5.5.0_3.0_1725911304904.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("f_roberta_classifier2","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("f_roberta_classifier2", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|f_roberta_classifier2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|309.0 MB| + +## References + +https://huggingface.co/James-kc-min/F_Roberta_classifier2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-facets_ep3_1234_en.md b/docs/_posts/ahmedlone127/2024-09-09-facets_ep3_1234_en.md new file mode 100644 index 00000000000000..c74a8f8b998cdf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-facets_ep3_1234_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English facets_ep3_1234 MPNetEmbeddings from ingeol +author: John Snow Labs +name: facets_ep3_1234 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`facets_ep3_1234` is a English model originally trained by ingeol. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/facets_ep3_1234_en_5.5.0_3.0_1725896821096.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/facets_ep3_1234_en_5.5.0_3.0_1725896821096.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("facets_ep3_1234","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("facets_ep3_1234","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|facets_ep3_1234| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/ingeol/facets_ep3_1234 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-faqs_distillbert_en.md b/docs/_posts/ahmedlone127/2024-09-09-faqs_distillbert_en.md new file mode 100644 index 00000000000000..e945bc179015a1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-faqs_distillbert_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English faqs_distillbert DistilBertForQuestionAnswering from Sybghat +author: John Snow Labs +name: faqs_distillbert +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`faqs_distillbert` is a English model originally trained by Sybghat. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/faqs_distillbert_en_5.5.0_3.0_1725877455743.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/faqs_distillbert_en_5.5.0_3.0_1725877455743.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("faqs_distillbert","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("faqs_distillbert", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
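+
+The predicted answer span is returned in the `result` field of the `answer` annotation column. A minimal sketch for reading it back:
+
+```python
+# One answer string per question/context pair.
+pipelineDF.select("answer.result").show(truncate=False)
+```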
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|faqs_distillbert| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Sybghat/FAQs_DistillBERT \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-fill_mask_ondiet_en.md b/docs/_posts/ahmedlone127/2024-09-09-fill_mask_ondiet_en.md new file mode 100644 index 00000000000000..c3d773ba0e7244 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-fill_mask_ondiet_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English fill_mask_ondiet DistilBertEmbeddings from Ondiet +author: John Snow Labs +name: fill_mask_ondiet +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`fill_mask_ondiet` is a English model originally trained by Ondiet. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/fill_mask_ondiet_en_5.5.0_3.0_1725905388680.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/fill_mask_ondiet_en_5.5.0_3.0_1725905388680.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("fill_mask_ondiet","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("fill_mask_ondiet","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|fill_mask_ondiet| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Ondiet/fill_mask \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-fill_mask_ondiet_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-fill_mask_ondiet_pipeline_en.md new file mode 100644 index 00000000000000..97b12a3aac5c53 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-fill_mask_ondiet_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English fill_mask_ondiet_pipeline pipeline DistilBertEmbeddings from Ondiet +author: John Snow Labs +name: fill_mask_ondiet_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`fill_mask_ondiet_pipeline` is a English model originally trained by Ondiet. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/fill_mask_ondiet_pipeline_en_5.5.0_3.0_1725905400415.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/fill_mask_ondiet_pipeline_en_5.5.0_3.0_1725905400415.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("fill_mask_ondiet_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("fill_mask_ondiet_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|fill_mask_ondiet_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/Ondiet/fill_mask + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-fin_mpnet_base_en.md b/docs/_posts/ahmedlone127/2024-09-09-fin_mpnet_base_en.md new file mode 100644 index 00000000000000..f180bc425770b2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-fin_mpnet_base_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English fin_mpnet_base MPNetEmbeddings from mukaj +author: John Snow Labs +name: fin_mpnet_base +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`fin_mpnet_base` is a English model originally trained by mukaj. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/fin_mpnet_base_en_5.5.0_3.0_1725897357688.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/fin_mpnet_base_en_5.5.0_3.0_1725897357688.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("fin_mpnet_base","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("fin_mpnet_base","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
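+
+MPNetEmbeddings emits one sentence-level vector per input document; the vector lives in the `embeddings` field of each annotation. A minimal sketch for pulling it out, for example to feed a downstream similarity search:
+
+```python
+from pyspark.sql.functions import explode
+
+# One vector per document (768 dimensions assumed from the base MPNet architecture).
+pipelineDF.select(explode("embeddings").alias("emb")) \
+    .select("emb.embeddings") \
+    .show(1, truncate=False)
+```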
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|fin_mpnet_base| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/mukaj/fin-mpnet-base \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-final_model3_en.md b/docs/_posts/ahmedlone127/2024-09-09-final_model3_en.md new file mode 100644 index 00000000000000..1ca87776e27abb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-final_model3_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English final_model3 MarianTransformer from sanghyo +author: John Snow Labs +name: final_model3 +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`final_model3` is a English model originally trained by sanghyo. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/final_model3_en_5.5.0_3.0_1725863141694.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/final_model3_en_5.5.0_3.0_1725863141694.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("final_model3","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("final_model3","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
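+
+Each sentence found by the sentence detector yields one translation, collected under the `translation` column set above. A minimal sketch for reading the translated strings:
+
+```python
+# One translated string per detected sentence.
+pipelineDF.select("translation.result").show(truncate=False)
+```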
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|final_model3| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|1.0 GB| + +## References + +https://huggingface.co/sanghyo/Final_model3 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-financial_sentiment_analysis_sigma_en.md b/docs/_posts/ahmedlone127/2024-09-09-financial_sentiment_analysis_sigma_en.md new file mode 100644 index 00000000000000..5ec34750c69b7a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-financial_sentiment_analysis_sigma_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English financial_sentiment_analysis_sigma BertForSequenceClassification from Sigma +author: John Snow Labs +name: financial_sentiment_analysis_sigma +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`financial_sentiment_analysis_sigma` is a English model originally trained by Sigma. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/financial_sentiment_analysis_sigma_en_5.5.0_3.0_1725900276456.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/financial_sentiment_analysis_sigma_en_5.5.0_3.0_1725900276456.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = BertForSequenceClassification.pretrained("financial_sentiment_analysis_sigma","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = BertForSequenceClassification.pretrained("financial_sentiment_analysis_sigma", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|financial_sentiment_analysis_sigma| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|411.6 MB| + +## References + +https://huggingface.co/Sigma/financial-sentiment-analysis \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-fine_tune_embeddnew_sih_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-fine_tune_embeddnew_sih_pipeline_en.md new file mode 100644 index 00000000000000..9f184ee8c4b998 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-fine_tune_embeddnew_sih_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English fine_tune_embeddnew_sih_pipeline pipeline BertForSequenceClassification from shashaaa +author: John Snow Labs +name: fine_tune_embeddnew_sih_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`fine_tune_embeddnew_sih_pipeline` is a English model originally trained by shashaaa. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/fine_tune_embeddnew_sih_pipeline_en_5.5.0_3.0_1725900024757.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/fine_tune_embeddnew_sih_pipeline_en_5.5.0_3.0_1725900024757.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("fine_tune_embeddnew_sih_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("fine_tune_embeddnew_sih_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|fine_tune_embeddnew_sih_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|409.5 MB| + +## References + +https://huggingface.co/shashaaa/fine_tune_embeddnew_SIH + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-fine_tuned_distilbert_vcolella_en.md b/docs/_posts/ahmedlone127/2024-09-09-fine_tuned_distilbert_vcolella_en.md new file mode 100644 index 00000000000000..6ea1159d8d32ef --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-fine_tuned_distilbert_vcolella_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English fine_tuned_distilbert_vcolella DistilBertForSequenceClassification from vcolella +author: John Snow Labs +name: fine_tuned_distilbert_vcolella +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`fine_tuned_distilbert_vcolella` is a English model originally trained by vcolella. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/fine_tuned_distilbert_vcolella_en_5.5.0_3.0_1725873352377.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/fine_tuned_distilbert_vcolella_en_5.5.0_3.0_1725873352377.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DistilBertForSequenceClassification.pretrained("fine_tuned_distilbert_vcolella","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DistilBertForSequenceClassification.pretrained("fine_tuned_distilbert_vcolella", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|fine_tuned_distilbert_vcolella| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/vcolella/fine-tuned-distilbert \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-fine_tuned_qabert_small_en.md b/docs/_posts/ahmedlone127/2024-09-09-fine_tuned_qabert_small_en.md new file mode 100644 index 00000000000000..753b2271179af5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-fine_tuned_qabert_small_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English fine_tuned_qabert_small DistilBertForQuestionAnswering from jayvinay +author: John Snow Labs +name: fine_tuned_qabert_small +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`fine_tuned_qabert_small` is a English model originally trained by jayvinay. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/fine_tuned_qabert_small_en_5.5.0_3.0_1725892623280.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/fine_tuned_qabert_small_en_5.5.0_3.0_1725892623280.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("fine_tuned_qabert_small","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("fine_tuned_qabert_small", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|fine_tuned_qabert_small| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/jayvinay/fine-tuned-qabert-small \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-fine_tuned_qabert_small_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-fine_tuned_qabert_small_pipeline_en.md new file mode 100644 index 00000000000000..604048f7584ceb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-fine_tuned_qabert_small_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English fine_tuned_qabert_small_pipeline pipeline DistilBertForQuestionAnswering from jayvinay +author: John Snow Labs +name: fine_tuned_qabert_small_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`fine_tuned_qabert_small_pipeline` is a English model originally trained by jayvinay. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/fine_tuned_qabert_small_pipeline_en_5.5.0_3.0_1725892637018.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/fine_tuned_qabert_small_pipeline_en_5.5.0_3.0_1725892637018.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("fine_tuned_qabert_small_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("fine_tuned_qabert_small_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|fine_tuned_qabert_small_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/jayvinay/fine-tuned-qabert-small + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-finetuned_roberta_base_sentiment_classifier_en.md b/docs/_posts/ahmedlone127/2024-09-09-finetuned_roberta_base_sentiment_classifier_en.md new file mode 100644 index 00000000000000..8d928863c3dcc1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-finetuned_roberta_base_sentiment_classifier_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English finetuned_roberta_base_sentiment_classifier RoBertaForSequenceClassification from gArthur98 +author: John Snow Labs +name: finetuned_roberta_base_sentiment_classifier +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuned_roberta_base_sentiment_classifier` is a English model originally trained by gArthur98. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuned_roberta_base_sentiment_classifier_en_5.5.0_3.0_1725912307774.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuned_roberta_base_sentiment_classifier_en_5.5.0_3.0_1725912307774.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("finetuned_roberta_base_sentiment_classifier","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("finetuned_roberta_base_sentiment_classifier", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuned_roberta_base_sentiment_classifier| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|424.8 MB| + +## References + +https://huggingface.co/gArthur98/Finetuned-Roberta-Base-Sentiment-classifier \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-finetuned_sentiment_bert_base_model_en.md b/docs/_posts/ahmedlone127/2024-09-09-finetuned_sentiment_bert_base_model_en.md new file mode 100644 index 00000000000000..4c679be32a19a9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-finetuned_sentiment_bert_base_model_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English finetuned_sentiment_bert_base_model BertForSequenceClassification from pursuitofds +author: John Snow Labs +name: finetuned_sentiment_bert_base_model +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuned_sentiment_bert_base_model` is a English model originally trained by pursuitofds. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuned_sentiment_bert_base_model_en_5.5.0_3.0_1725900147767.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuned_sentiment_bert_base_model_en_5.5.0_3.0_1725900147767.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = BertForSequenceClassification.pretrained("finetuned_sentiment_bert_base_model","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = BertForSequenceClassification.pretrained("finetuned_sentiment_bert_base_model", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuned_sentiment_bert_base_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.4 MB| + +## References + +https://huggingface.co/pursuitofds/finetuned_sentiment_bert_base_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-finetuned_sentiment_classfication_roberta_model_preencez_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-finetuned_sentiment_classfication_roberta_model_preencez_pipeline_en.md new file mode 100644 index 00000000000000..c8ea8fc00436c6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-finetuned_sentiment_classfication_roberta_model_preencez_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English finetuned_sentiment_classfication_roberta_model_preencez_pipeline pipeline RoBertaForSequenceClassification from Preencez +author: John Snow Labs +name: finetuned_sentiment_classfication_roberta_model_preencez_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuned_sentiment_classfication_roberta_model_preencez_pipeline` is a English model originally trained by Preencez. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuned_sentiment_classfication_roberta_model_preencez_pipeline_en_5.5.0_3.0_1725904043293.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuned_sentiment_classfication_roberta_model_preencez_pipeline_en_5.5.0_3.0_1725904043293.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("finetuned_sentiment_classfication_roberta_model_preencez_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("finetuned_sentiment_classfication_roberta_model_preencez_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuned_sentiment_classfication_roberta_model_preencez_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|444.0 MB| + +## References + +https://huggingface.co/Preencez/finetuned-Sentiment-classfication-ROBERTA-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-finetunes_bert_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-finetunes_bert_pipeline_en.md new file mode 100644 index 00000000000000..27f0c42eab0725 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-finetunes_bert_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English finetunes_bert_pipeline pipeline DistilBertEmbeddings from Mobina2023 +author: John Snow Labs +name: finetunes_bert_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetunes_bert_pipeline` is a English model originally trained by Mobina2023. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetunes_bert_pipeline_en_5.5.0_3.0_1725921635431.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetunes_bert_pipeline_en_5.5.0_3.0_1725921635431.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("finetunes_bert_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("finetunes_bert_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetunes_bert_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Mobina2023/finetunes-bert + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-finetuning_emotion_model_5_v2_en.md b/docs/_posts/ahmedlone127/2024-09-09-finetuning_emotion_model_5_v2_en.md new file mode 100644 index 00000000000000..4677fe795d2bae --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-finetuning_emotion_model_5_v2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English finetuning_emotion_model_5_v2 DistilBertForSequenceClassification from Almancy +author: John Snow Labs +name: finetuning_emotion_model_5_v2 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuning_emotion_model_5_v2` is a English model originally trained by Almancy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuning_emotion_model_5_v2_en_5.5.0_3.0_1725873262157.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuning_emotion_model_5_v2_en_5.5.0_3.0_1725873262157.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DistilBertForSequenceClassification.pretrained("finetuning_emotion_model_5_v2","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DistilBertForSequenceClassification.pretrained("finetuning_emotion_model_5_v2", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuning_emotion_model_5_v2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/Almancy/finetuning-emotion-model-5-v2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-finetuning_emotion_model_5_v2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-finetuning_emotion_model_5_v2_pipeline_en.md new file mode 100644 index 00000000000000..2abffc441cc112 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-finetuning_emotion_model_5_v2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English finetuning_emotion_model_5_v2_pipeline pipeline DistilBertForSequenceClassification from Almancy +author: John Snow Labs +name: finetuning_emotion_model_5_v2_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuning_emotion_model_5_v2_pipeline` is a English model originally trained by Almancy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuning_emotion_model_5_v2_pipeline_en_5.5.0_3.0_1725873274360.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuning_emotion_model_5_v2_pipeline_en_5.5.0_3.0_1725873274360.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("finetuning_emotion_model_5_v2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("finetuning_emotion_model_5_v2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuning_emotion_model_5_v2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/Almancy/finetuning-emotion-model-5-v2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-floor_model_en.md b/docs/_posts/ahmedlone127/2024-09-09-floor_model_en.md new file mode 100644 index 00000000000000..0570903562ec55 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-floor_model_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English floor_model DistilBertForQuestionAnswering from rugvedabodke +author: John Snow Labs +name: floor_model +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`floor_model` is a English model originally trained by rugvedabodke. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/floor_model_en_5.5.0_3.0_1725868928303.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/floor_model_en_5.5.0_3.0_1725868928303.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = DistilBertForQuestionAnswering.pretrained("floor_model","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = DistilBertForQuestionAnswering.pretrained("floor_model", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
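As a follow-up sketch (assuming the `pipelineDF` produced above), the predicted answer text can be read from the `result` field of the `answer` annotations:

```python
# Hypothetical follow-up: extract the span predicted by the question-answering model.
pipelineDF.selectExpr("explode(answer.result) as predicted_answer").show(truncate=False)
```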
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|floor_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/rugvedabodke/floor_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-from_classifier_v2_en.md b/docs/_posts/ahmedlone127/2024-09-09-from_classifier_v2_en.md new file mode 100644 index 00000000000000..d382d82059c934 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-from_classifier_v2_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English from_classifier_v2 MPNetEmbeddings from futuredatascience +author: John Snow Labs +name: from_classifier_v2 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`from_classifier_v2` is a English model originally trained by futuredatascience. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/from_classifier_v2_en_5.5.0_3.0_1725897072353.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/from_classifier_v2_en_5.5.0_3.0_1725897072353.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("from_classifier_v2","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("from_classifier_v2","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
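As a follow-up sketch (assuming the `pipelineDF` produced above), the sentence vectors are stored in the `embeddings` field of the `embeddings` column:

```python
# Hypothetical follow-up: one dense vector per document annotation.
pipelineDF.selectExpr("explode(embeddings.embeddings) as sentence_embedding").show(truncate=False)
```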
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|from_classifier_v2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/futuredatascience/from-classifier-v2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-from_classifier_v2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-from_classifier_v2_pipeline_en.md new file mode 100644 index 00000000000000..e60a688618bdf4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-from_classifier_v2_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English from_classifier_v2_pipeline pipeline MPNetEmbeddings from futuredatascience +author: John Snow Labs +name: from_classifier_v2_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`from_classifier_v2_pipeline` is a English model originally trained by futuredatascience. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/from_classifier_v2_pipeline_en_5.5.0_3.0_1725897093045.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/from_classifier_v2_pipeline_en_5.5.0_3.0_1725897093045.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("from_classifier_v2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("from_classifier_v2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|from_classifier_v2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/futuredatascience/from-classifier-v2 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-gefs_language_detector_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-gefs_language_detector_pipeline_en.md new file mode 100644 index 00000000000000..da6f5246e13d37 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-gefs_language_detector_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English gefs_language_detector_pipeline pipeline XlmRoBertaForSequenceClassification from ImranzamanML +author: John Snow Labs +name: gefs_language_detector_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`gefs_language_detector_pipeline` is a English model originally trained by ImranzamanML. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/gefs_language_detector_pipeline_en_5.5.0_3.0_1725871778900.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/gefs_language_detector_pipeline_en_5.5.0_3.0_1725871778900.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("gefs_language_detector_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("gefs_language_detector_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|gefs_language_detector_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|805.2 MB| + +## References + +https://huggingface.co/ImranzamanML/GEFS-language-detector + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-geo_multi_qa_mpnet_base_dot_v1_en.md b/docs/_posts/ahmedlone127/2024-09-09-geo_multi_qa_mpnet_base_dot_v1_en.md new file mode 100644 index 00000000000000..9edde05ff0f3ae --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-geo_multi_qa_mpnet_base_dot_v1_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English geo_multi_qa_mpnet_base_dot_v1 MPNetEmbeddings from chbwang +author: John Snow Labs +name: geo_multi_qa_mpnet_base_dot_v1 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`geo_multi_qa_mpnet_base_dot_v1` is a English model originally trained by chbwang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/geo_multi_qa_mpnet_base_dot_v1_en_5.5.0_3.0_1725896764639.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/geo_multi_qa_mpnet_base_dot_v1_en_5.5.0_3.0_1725896764639.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("geo_multi_qa_mpnet_base_dot_v1","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("geo_multi_qa_mpnet_base_dot_v1","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|geo_multi_qa_mpnet_base_dot_v1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.0 MB| + +## References + +https://huggingface.co/chbwang/geo_multi-qa-mpnet-base-dot-v1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-granite_guardian_hap_125m_en.md b/docs/_posts/ahmedlone127/2024-09-09-granite_guardian_hap_125m_en.md new file mode 100644 index 00000000000000..3696a7197deef7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-granite_guardian_hap_125m_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English granite_guardian_hap_125m RoBertaForSequenceClassification from ibm-granite +author: John Snow Labs +name: granite_guardian_hap_125m +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`granite_guardian_hap_125m` is a English model originally trained by ibm-granite. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/granite_guardian_hap_125m_en_5.5.0_3.0_1725904829489.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/granite_guardian_hap_125m_en_5.5.0_3.0_1725904829489.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = RoBertaForSequenceClassification.pretrained("granite_guardian_hap_125m","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = RoBertaForSequenceClassification.pretrained("granite_guardian_hap_125m", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
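As a follow-up sketch (assuming the `pipelineDF` produced above), the predicted label for each row is the `result` of the `class` annotations:

```python
# Hypothetical follow-up: read the predicted class label per input row.
pipelineDF.selectExpr("text", "explode(`class`.result) as predicted_label").show(truncate=False)
```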
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|granite_guardian_hap_125m| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.2 MB| + +## References + +https://huggingface.co/ibm-granite/granite-guardian-hap-125m \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-hello_english_hindi_translate_1_en.md b/docs/_posts/ahmedlone127/2024-09-09-hello_english_hindi_translate_1_en.md new file mode 100644 index 00000000000000..3cace6547374d9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-hello_english_hindi_translate_1_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English hello_english_hindi_translate_1 MarianTransformer from Sarthak7777 +author: John Snow Labs +name: hello_english_hindi_translate_1 +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`hello_english_hindi_translate_1` is a English model originally trained by Sarthak7777. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/hello_english_hindi_translate_1_en_5.5.0_3.0_1725864425956.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/hello_english_hindi_translate_1_en_5.5.0_3.0_1725864425956.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("hello_english_hindi_translate_1","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("hello_english_hindi_translate_1","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|hello_english_hindi_translate_1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|523.1 MB| + +## References + +https://huggingface.co/Sarthak7777/hello_en_hi_translate-1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-helsinki_danish_swedish_v10_en.md b/docs/_posts/ahmedlone127/2024-09-09-helsinki_danish_swedish_v10_en.md new file mode 100644 index 00000000000000..1e7318439cec53 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-helsinki_danish_swedish_v10_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English helsinki_danish_swedish_v10 MarianTransformer from Danieljacobsen +author: John Snow Labs +name: helsinki_danish_swedish_v10 +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`helsinki_danish_swedish_v10` is a English model originally trained by Danieljacobsen. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/helsinki_danish_swedish_v10_en_5.5.0_3.0_1725891199813.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/helsinki_danish_swedish_v10_en_5.5.0_3.0_1725891199813.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("helsinki_danish_swedish_v10","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("helsinki_danish_swedish_v10","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
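As a follow-up sketch (assuming the `pipelineDF` produced above), each translated sentence is exposed in the `result` field of the `translation` column:

```python
# Hypothetical follow-up: one translated string per detected sentence.
pipelineDF.selectExpr("explode(translation.result) as translated_text").show(truncate=False)
```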
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|helsinki_danish_swedish_v10| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|496.9 MB| + +## References + +https://huggingface.co/Danieljacobsen/Helsinki-DA-SV-v10 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-helsinki_english_multiple_languages_test_01_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-helsinki_english_multiple_languages_test_01_pipeline_en.md new file mode 100644 index 00000000000000..7cb1ce5be75126 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-helsinki_english_multiple_languages_test_01_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English helsinki_english_multiple_languages_test_01_pipeline pipeline MarianTransformer from Shularp +author: John Snow Labs +name: helsinki_english_multiple_languages_test_01_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`helsinki_english_multiple_languages_test_01_pipeline` is a English model originally trained by Shularp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/helsinki_english_multiple_languages_test_01_pipeline_en_5.5.0_3.0_1725913683170.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/helsinki_english_multiple_languages_test_01_pipeline_en_5.5.0_3.0_1725913683170.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("helsinki_english_multiple_languages_test_01_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("helsinki_english_multiple_languages_test_01_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|helsinki_english_multiple_languages_test_01_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|530.9 MB| + +## References + +https://huggingface.co/Shularp/Helsinki_en-mul_test_01 + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-homework_1_en.md b/docs/_posts/ahmedlone127/2024-09-09-homework_1_en.md new file mode 100644 index 00000000000000..75cef68b2e3382 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-homework_1_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English homework_1 DistilBertForSequenceClassification from Stonekraken +author: John Snow Labs +name: homework_1 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`homework_1` is a English model originally trained by Stonekraken. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/homework_1_en_5.5.0_3.0_1725873136310.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/homework_1_en_5.5.0_3.0_1725873136310.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = DistilBertForSequenceClassification.pretrained("homework_1","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = DistilBertForSequenceClassification.pretrained("homework_1", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|homework_1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/Stonekraken/homework_1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-homework_1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-homework_1_pipeline_en.md new file mode 100644 index 00000000000000..78d17bd36a9740 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-homework_1_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English homework_1_pipeline pipeline DistilBertForSequenceClassification from Stonekraken +author: John Snow Labs +name: homework_1_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`homework_1_pipeline` is a English model originally trained by Stonekraken. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/homework_1_pipeline_en_5.5.0_3.0_1725873148783.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/homework_1_pipeline_en_5.5.0_3.0_1725873148783.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("homework_1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("homework_1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|homework_1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/Stonekraken/homework_1 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-hoogberta_pipeline_th.md b/docs/_posts/ahmedlone127/2024-09-09-hoogberta_pipeline_th.md new file mode 100644 index 00000000000000..51f4c149a05ff8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-hoogberta_pipeline_th.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Thai hoogberta_pipeline pipeline RoBertaEmbeddings from lst-nectec +author: John Snow Labs +name: hoogberta_pipeline +date: 2024-09-09 +tags: [th, open_source, pipeline, onnx] +task: Embeddings +language: th +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`hoogberta_pipeline` is a Thai model originally trained by lst-nectec. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/hoogberta_pipeline_th_5.5.0_3.0_1725910250907.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/hoogberta_pipeline_th_5.5.0_3.0_1725910250907.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("hoogberta_pipeline", lang = "th") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("hoogberta_pipeline", lang = "th") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|hoogberta_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|th| +|Size:|342.2 MB| + +## References + +https://huggingface.co/lst-nectec/HoogBERTa + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-hoogberta_th.md b/docs/_posts/ahmedlone127/2024-09-09-hoogberta_th.md new file mode 100644 index 00000000000000..e2383c508b96eb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-hoogberta_th.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Thai hoogberta RoBertaEmbeddings from lst-nectec +author: John Snow Labs +name: hoogberta +date: 2024-09-09 +tags: [th, open_source, onnx, embeddings, roberta] +task: Embeddings +language: th +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`hoogberta` is a Thai model originally trained by lst-nectec. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/hoogberta_th_5.5.0_3.0_1725910152627.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/hoogberta_th_5.5.0_3.0_1725910152627.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("hoogberta","th") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("hoogberta","th") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
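As a follow-up sketch (assuming the `pipelineDF` produced above), token-level vectors can be paired with their tokens:

```python
# Hypothetical follow-up: zip each token string with its embedding vector.
pipelineDF.selectExpr(
    "explode(arrays_zip(embeddings.result, embeddings.embeddings)) as token_embedding"
).show(truncate=False)
```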
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|hoogberta| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|th| +|Size:|342.2 MB| + +## References + +https://huggingface.co/lst-nectec/HoogBERTa \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-hospitality_intents_pretrained_en.md b/docs/_posts/ahmedlone127/2024-09-09-hospitality_intents_pretrained_en.md new file mode 100644 index 00000000000000..9bf1d67e468da2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-hospitality_intents_pretrained_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English hospitality_intents_pretrained RoBertaForSequenceClassification from WellaBanda +author: John Snow Labs +name: hospitality_intents_pretrained +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`hospitality_intents_pretrained` is a English model originally trained by WellaBanda. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/hospitality_intents_pretrained_en_5.5.0_3.0_1725912358940.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/hospitality_intents_pretrained_en_5.5.0_3.0_1725912358940.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = RoBertaForSequenceClassification.pretrained("hospitality_intents_pretrained","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = RoBertaForSequenceClassification.pretrained("hospitality_intents_pretrained", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|hospitality_intents_pretrained| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|430.1 MB| + +## References + +https://huggingface.co/WellaBanda/hospitality_intents_pretrained \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-inde_0_en.md b/docs/_posts/ahmedlone127/2024-09-09-inde_0_en.md new file mode 100644 index 00000000000000..94a8d0fb6569a1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-inde_0_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English inde_0 RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: inde_0 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`inde_0` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/inde_0_en_5.5.0_3.0_1725902518692.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/inde_0_en_5.5.0_3.0_1725902518692.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = RoBertaForSequenceClassification.pretrained("inde_0","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = RoBertaForSequenceClassification.pretrained("inde_0", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|inde_0| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Inde_0 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-inde_0_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-inde_0_pipeline_en.md new file mode 100644 index 00000000000000..5a818529604912 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-inde_0_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English inde_0_pipeline pipeline RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: inde_0_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`inde_0_pipeline` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/inde_0_pipeline_en_5.5.0_3.0_1725902540430.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/inde_0_pipeline_en_5.5.0_3.0_1725902540430.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("inde_0_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("inde_0_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|inde_0_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Inde_0 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-ipc_level1_h_en.md b/docs/_posts/ahmedlone127/2024-09-09-ipc_level1_h_en.md new file mode 100644 index 00000000000000..1a73676bc65bd9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-ipc_level1_h_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English ipc_level1_h RoBertaForSequenceClassification from intelcomp +author: John Snow Labs +name: ipc_level1_h +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ipc_level1_h` is a English model originally trained by intelcomp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ipc_level1_h_en_5.5.0_3.0_1725903915919.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ipc_level1_h_en_5.5.0_3.0_1725903915919.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = RoBertaForSequenceClassification.pretrained("ipc_level1_h","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = RoBertaForSequenceClassification.pretrained("ipc_level1_h", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ipc_level1_h| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/intelcomp/ipc_level1_H \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-ipc_level1_h_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-ipc_level1_h_pipeline_en.md new file mode 100644 index 00000000000000..2564e7eba8b049 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-ipc_level1_h_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English ipc_level1_h_pipeline pipeline RoBertaForSequenceClassification from intelcomp +author: John Snow Labs +name: ipc_level1_h_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ipc_level1_h_pipeline` is a English model originally trained by intelcomp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ipc_level1_h_pipeline_en_5.5.0_3.0_1725903993959.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ipc_level1_h_pipeline_en_5.5.0_3.0_1725903993959.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("ipc_level1_h_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("ipc_level1_h_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
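For quick experiments, a rough sketch (not part of the original card) is to call `annotate` on a single string instead of transforming a DataFrame; the sample sentence and the `class` output key below are assumptions based on this pipeline's classifier stage:

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("ipc_level1_h_pipeline", lang="en")
# annotate() returns a dict keyed by the pipeline's output columns.
result = pipeline.annotate("A method for manufacturing a semiconductor device.")
print(result.get("class"))
```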
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ipc_level1_h_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/intelcomp/ipc_level1_H + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-iwslt17_marian_big_ctx4_cwd2_english_french_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-iwslt17_marian_big_ctx4_cwd2_english_french_pipeline_en.md new file mode 100644 index 00000000000000..6f214dd89c44b6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-iwslt17_marian_big_ctx4_cwd2_english_french_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English iwslt17_marian_big_ctx4_cwd2_english_french_pipeline pipeline MarianTransformer from context-mt +author: John Snow Labs +name: iwslt17_marian_big_ctx4_cwd2_english_french_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`iwslt17_marian_big_ctx4_cwd2_english_french_pipeline` is a English model originally trained by context-mt. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/iwslt17_marian_big_ctx4_cwd2_english_french_pipeline_en_5.5.0_3.0_1725914517141.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/iwslt17_marian_big_ctx4_cwd2_english_french_pipeline_en_5.5.0_3.0_1725914517141.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("iwslt17_marian_big_ctx4_cwd2_english_french_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("iwslt17_marian_big_ctx4_cwd2_english_french_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|iwslt17_marian_big_ctx4_cwd2_english_french_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/context-mt/iwslt17-marian-big-ctx4-cwd2-en-fr + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-iwslt17_marian_small_ctx4_cwd2_english_french_en.md b/docs/_posts/ahmedlone127/2024-09-09-iwslt17_marian_small_ctx4_cwd2_english_french_en.md new file mode 100644 index 00000000000000..4ad5f7c7d924ea --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-iwslt17_marian_small_ctx4_cwd2_english_french_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English iwslt17_marian_small_ctx4_cwd2_english_french MarianTransformer from context-mt +author: John Snow Labs +name: iwslt17_marian_small_ctx4_cwd2_english_french +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`iwslt17_marian_small_ctx4_cwd2_english_french` is a English model originally trained by context-mt. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/iwslt17_marian_small_ctx4_cwd2_english_french_en_5.5.0_3.0_1725863233649.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/iwslt17_marian_small_ctx4_cwd2_english_french_en_5.5.0_3.0_1725863233649.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("iwslt17_marian_small_ctx4_cwd2_english_french","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("iwslt17_marian_small_ctx4_cwd2_english_french","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|iwslt17_marian_small_ctx4_cwd2_english_french| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.4 MB| + +## References + +https://huggingface.co/context-mt/iwslt17-marian-small-ctx4-cwd2-en-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-jerteh_355_sr.md b/docs/_posts/ahmedlone127/2024-09-09-jerteh_355_sr.md new file mode 100644 index 00000000000000..35caa11b3ce10c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-jerteh_355_sr.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Serbian jerteh_355 RoBertaEmbeddings from jerteh +author: John Snow Labs +name: jerteh_355 +date: 2024-09-09 +tags: [sr, open_source, onnx, embeddings, roberta] +task: Embeddings +language: sr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`jerteh_355` is a Serbian model originally trained by jerteh. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/jerteh_355_sr_5.5.0_3.0_1725910760359.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/jerteh_355_sr_5.5.0_3.0_1725910760359.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("jerteh_355","sr") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("jerteh_355","sr") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
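+
+To inspect the vectors produced above, the token-level annotations can be unpacked into one row per token. A minimal sketch, reusing the `embeddings` output column from the example:
+
+```python
+from pyspark.sql import functions as F
+
+# Each annotation pairs a token (result) with its embedding vector (embeddings).
+tokens = pipelineDF.select(F.explode("embeddings").alias("ann")) \
+    .select(F.col("ann.result").alias("token"),
+            F.col("ann.embeddings").alias("vector"))
+tokens.show(truncate=80)
+```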
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|jerteh_355| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|sr| +|Size:|1.3 GB| + +## References + +https://huggingface.co/jerteh/Jerteh-355 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-koelectra_small_v2_distilled_korquad_384_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-koelectra_small_v2_distilled_korquad_384_pipeline_en.md new file mode 100644 index 00000000000000..910fe892fd99f8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-koelectra_small_v2_distilled_korquad_384_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English koelectra_small_v2_distilled_korquad_384_pipeline pipeline RoBertaForQuestionAnswering from Mary8 +author: John Snow Labs +name: koelectra_small_v2_distilled_korquad_384_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`koelectra_small_v2_distilled_korquad_384_pipeline` is a English model originally trained by Mary8. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/koelectra_small_v2_distilled_korquad_384_pipeline_en_5.5.0_3.0_1725867372956.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/koelectra_small_v2_distilled_korquad_384_pipeline_en_5.5.0_3.0_1725867372956.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("koelectra_small_v2_distilled_korquad_384_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("koelectra_small_v2_distilled_korquad_384_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
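+
+Question-answering pipelines take two inputs rather than one: the `MultiDocumentAssembler` listed under Included Models reads a question and a context passage. The sketch below is hedged: it assumes the assembler is configured for `question` and `context` columns and that the span classifier writes to an `answer` column, which is the usual convention for these generated pipelines but is not confirmed here.
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("koelectra_small_v2_distilled_korquad_384_pipeline", lang="en")
+
+# One column for the question, one for the passage to search (assumed column names).
+df = spark.createDataFrame(
+    [["What is Spark NLP built on?", "Spark NLP is an NLP library built on Apache Spark."]],
+    ["question", "context"],
+)
+
+pipeline.transform(df).selectExpr("explode(answer.result) as answer").show(truncate=False)
+```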
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|koelectra_small_v2_distilled_korquad_384_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|306.9 MB| + +## References + +https://huggingface.co/Mary8/koelectra-small-v2-distilled-korquad-384 + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-lab1_random_reshphil_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-lab1_random_reshphil_pipeline_en.md new file mode 100644 index 00000000000000..1961079b665b16 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-lab1_random_reshphil_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English lab1_random_reshphil_pipeline pipeline MarianTransformer from Reshphil +author: John Snow Labs +name: lab1_random_reshphil_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`lab1_random_reshphil_pipeline` is a English model originally trained by Reshphil. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/lab1_random_reshphil_pipeline_en_5.5.0_3.0_1725914245273.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/lab1_random_reshphil_pipeline_en_5.5.0_3.0_1725914245273.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("lab1_random_reshphil_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("lab1_random_reshphil_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|lab1_random_reshphil_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|508.7 MB| + +## References + +https://huggingface.co/Reshphil/lab1_random + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-lab2_adam_reshphil_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-lab2_adam_reshphil_pipeline_en.md new file mode 100644 index 00000000000000..2964cd69b65c63 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-lab2_adam_reshphil_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English lab2_adam_reshphil_pipeline pipeline MarianTransformer from Reshphil +author: John Snow Labs +name: lab2_adam_reshphil_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`lab2_adam_reshphil_pipeline` is a English model originally trained by Reshphil. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/lab2_adam_reshphil_pipeline_en_5.5.0_3.0_1725840856796.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/lab2_adam_reshphil_pipeline_en_5.5.0_3.0_1725840856796.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("lab2_adam_reshphil_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("lab2_adam_reshphil_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|lab2_adam_reshphil_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|508.9 MB| + +## References + +https://huggingface.co/Reshphil/lab2_adam + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-latin_english_case_en.md b/docs/_posts/ahmedlone127/2024-09-09-latin_english_case_en.md new file mode 100644 index 00000000000000..57c13af625817c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-latin_english_case_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English latin_english_case MarianTransformer from grosenthal +author: John Snow Labs +name: latin_english_case +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`latin_english_case` is a English model originally trained by grosenthal. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/latin_english_case_en_5.5.0_3.0_1725865577682.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/latin_english_case_en_5.5.0_3.0_1725865577682.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("latin_english_case","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("latin_english_case","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|latin_english_case| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|542.1 MB| + +## References + +https://huggingface.co/grosenthal/la_en_case \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-lenu_cz_en.md b/docs/_posts/ahmedlone127/2024-09-09-lenu_cz_en.md new file mode 100644 index 00000000000000..f528ecced2b836 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-lenu_cz_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English lenu_cz BertForSequenceClassification from Sociovestix +author: John Snow Labs +name: lenu_cz +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`lenu_cz` is a English model originally trained by Sociovestix. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/lenu_cz_en_5.5.0_3.0_1725900185834.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/lenu_cz_en_5.5.0_3.0_1725900185834.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = BertForSequenceClassification.pretrained("lenu_cz","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = BertForSequenceClassification.pretrained("lenu_cz", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
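+
+After `pipelineDF` has been computed as above, the predicted label for each input row sits in the `class` column. A minimal sketch; the example sentence is only a placeholder and should be replaced with the kind of text this classifier was trained on.
+
+```python
+# The .result field of the "class" annotation is the predicted label.
+pipelineDF.select("text", "class.result").show(truncate=False)
+
+# Per-label scores are typically carried in the annotation metadata.
+pipelineDF.selectExpr("explode(class) as c").select("c.result", "c.metadata").show(truncate=False)
+```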
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|lenu_cz| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|627.9 MB| + +## References + +https://huggingface.co/Sociovestix/lenu_CZ \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-lenu_cz_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-lenu_cz_pipeline_en.md new file mode 100644 index 00000000000000..458c68c7d70665 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-lenu_cz_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English lenu_cz_pipeline pipeline BertForSequenceClassification from Sociovestix +author: John Snow Labs +name: lenu_cz_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`lenu_cz_pipeline` is a English model originally trained by Sociovestix. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/lenu_cz_pipeline_en_5.5.0_3.0_1725900219115.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/lenu_cz_pipeline_en_5.5.0_3.0_1725900219115.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("lenu_cz_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("lenu_cz_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|lenu_cz_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|627.9 MB| + +## References + +https://huggingface.co/Sociovestix/lenu_CZ + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-literary_bge_base_en.md b/docs/_posts/ahmedlone127/2024-09-09-literary_bge_base_en.md new file mode 100644 index 00000000000000..c7a9dcd9e84a71 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-literary_bge_base_en.md @@ -0,0 +1,87 @@ +--- +layout: model +title: English literary_bge_base BGEEmbeddings from crazyjeannot +author: John Snow Labs +name: literary_bge_base +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, bge] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BGEEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BGEEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`literary_bge_base` is a English model originally trained by crazyjeannot. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/literary_bge_base_en_5.5.0_3.0_1725916780919.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/literary_bge_base_en_5.5.0_3.0_1725916780919.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+embeddings = BGEEmbeddings.pretrained("literary_bge_base","en") \
+    .setInputCols(["document"]) \
+    .setOutputCol("embeddings")
+
+pipeline = Pipeline().setStages([documentAssembler, embeddings])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val embeddings = BGEEmbeddings.pretrained("literary_bge_base","en")
+    .setInputCols(Array("document"))
+    .setOutputCol("embeddings")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
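+
+`BGEEmbeddings` emits one sentence-level vector per document in the `embeddings` column. The sketch below pulls those vectors into Python and compares two passages with cosine similarity; the passages are placeholders and numpy is assumed to be available on the driver.
+
+```python
+import numpy as np
+from pyspark.sql import functions as F
+
+texts = spark.createDataFrame(
+    [["The old house stood silent at the end of the lane."],
+     ["A quiet cottage waited at the road's end."]],
+    ["text"],
+)
+result = pipelineModel.transform(texts)
+
+# One annotation per document; its .embeddings field is the sentence vector.
+vectors = [row["vec"] for row in
+           result.select(F.expr("embeddings[0].embeddings").alias("vec")).collect()]
+
+a, b = np.array(vectors[0]), np.array(vectors[1])
+print(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
+```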
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|literary_bge_base| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[bge]| +|Language:|en| +|Size:|1.2 GB| + +## References + +https://huggingface.co/crazyjeannot/literary_bge_base \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-literary_bge_base_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-literary_bge_base_pipeline_en.md new file mode 100644 index 00000000000000..0d5018efab1276 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-literary_bge_base_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English literary_bge_base_pipeline pipeline BGEEmbeddings from crazyjeannot +author: John Snow Labs +name: literary_bge_base_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BGEEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`literary_bge_base_pipeline` is a English model originally trained by crazyjeannot. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/literary_bge_base_pipeline_en_5.5.0_3.0_1725916846160.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/literary_bge_base_pipeline_en_5.5.0_3.0_1725916846160.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("literary_bge_base_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("literary_bge_base_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|literary_bge_base_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.2 GB| + +## References + +https://huggingface.co/crazyjeannot/literary_bge_base + +## Included Models + +- DocumentAssembler +- BGEEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-malicious_url_detection_en.md b/docs/_posts/ahmedlone127/2024-09-09-malicious_url_detection_en.md new file mode 100644 index 00000000000000..92c20b6913c1fb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-malicious_url_detection_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English malicious_url_detection DistilBertForSequenceClassification from kmack +author: John Snow Labs +name: malicious_url_detection +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, distilbert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`malicious_url_detection` is a English model originally trained by kmack. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/malicious_url_detection_en_5.5.0_3.0_1725873051291.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/malicious_url_detection_en_5.5.0_3.0_1725873051291.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DistilBertForSequenceClassification.pretrained("malicious_url_detection","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DistilBertForSequenceClassification.pretrained("malicious_url_detection", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
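+
+Because this model scores URL strings rather than ordinary sentences, the input DataFrame should carry raw URLs in its `text` column. A short sketch reusing the fitted `pipelineModel` from the example; the URLs are illustrative and the label names depend on the model's training data.
+
+```python
+urls = spark.createDataFrame(
+    [["http://example.com/login"],
+     ["http://secure-verify.example.net/update"]],
+    ["text"],
+)
+
+# "class.result" holds the predicted label for each URL.
+pipelineModel.transform(urls).select("text", "class.result").show(truncate=False)
+```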
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|malicious_url_detection| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/kmack/malicious-url-detection \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-malicious_url_detection_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-malicious_url_detection_pipeline_en.md new file mode 100644 index 00000000000000..7c9d8152d5169f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-malicious_url_detection_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English malicious_url_detection_pipeline pipeline DistilBertForSequenceClassification from kmack +author: John Snow Labs +name: malicious_url_detection_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`malicious_url_detection_pipeline` is a English model originally trained by kmack. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/malicious_url_detection_pipeline_en_5.5.0_3.0_1725873065829.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/malicious_url_detection_pipeline_en_5.5.0_3.0_1725873065829.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("malicious_url_detection_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("malicious_url_detection_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|malicious_url_detection_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/kmack/malicious-url-detection + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-maltese_coref_english_hebrew_modern_gender_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-maltese_coref_english_hebrew_modern_gender_pipeline_en.md new file mode 100644 index 00000000000000..d39ceecfb19eef --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-maltese_coref_english_hebrew_modern_gender_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English maltese_coref_english_hebrew_modern_gender_pipeline pipeline MarianTransformer from nlphuji +author: John Snow Labs +name: maltese_coref_english_hebrew_modern_gender_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`maltese_coref_english_hebrew_modern_gender_pipeline` is a English model originally trained by nlphuji. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/maltese_coref_english_hebrew_modern_gender_pipeline_en_5.5.0_3.0_1725863270772.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/maltese_coref_english_hebrew_modern_gender_pipeline_en_5.5.0_3.0_1725863270772.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("maltese_coref_english_hebrew_modern_gender_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("maltese_coref_english_hebrew_modern_gender_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|maltese_coref_english_hebrew_modern_gender_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|545.8 MB| + +## References + +https://huggingface.co/nlphuji/mt_coref_en_he_gender + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_english_austronesian_languages_en.md b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_english_austronesian_languages_en.md new file mode 100644 index 00000000000000..7726b026946ec2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_english_austronesian_languages_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English marian_finetuned_english_austronesian_languages MarianTransformer from raphaelmerx +author: John Snow Labs +name: marian_finetuned_english_austronesian_languages +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_english_austronesian_languages` is a English model originally trained by raphaelmerx. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_english_austronesian_languages_en_5.5.0_3.0_1725913470824.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_english_austronesian_languages_en_5.5.0_3.0_1725913470824.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("marian_finetuned_english_austronesian_languages","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("marian_finetuned_english_austronesian_languages","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_english_austronesian_languages| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|510.8 MB| + +## References + +https://huggingface.co/raphaelmerx/marian-finetuned-en-map \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_english_austronesian_languages_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_english_austronesian_languages_pipeline_en.md new file mode 100644 index 00000000000000..4c324ed74e5947 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_english_austronesian_languages_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English marian_finetuned_english_austronesian_languages_pipeline pipeline MarianTransformer from raphaelmerx +author: John Snow Labs +name: marian_finetuned_english_austronesian_languages_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_english_austronesian_languages_pipeline` is a English model originally trained by raphaelmerx. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_english_austronesian_languages_pipeline_en_5.5.0_3.0_1725913501584.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_english_austronesian_languages_pipeline_en_5.5.0_3.0_1725913501584.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("marian_finetuned_english_austronesian_languages_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("marian_finetuned_english_austronesian_languages_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_english_austronesian_languages_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|511.3 MB| + +## References + +https://huggingface.co/raphaelmerx/marian-finetuned-en-map + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_chenxingphh_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_chenxingphh_pipeline_en.md new file mode 100644 index 00000000000000..206b1d726a6bc4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_chenxingphh_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_chenxingphh_pipeline pipeline MarianTransformer from chenxingphh +author: John Snow Labs +name: marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_chenxingphh_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_chenxingphh_pipeline` is a English model originally trained by chenxingphh. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_chenxingphh_pipeline_en_5.5.0_3.0_1725913501419.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_chenxingphh_pipeline_en_5.5.0_3.0_1725913501419.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_chenxingphh_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_chenxingphh_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_chenxingphh_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|508.8 MB| + +## References + +https://huggingface.co/chenxingphh/marian-finetuned-kde4-en-to-fr-accelerate + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_coreyabs_db_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_coreyabs_db_pipeline_en.md new file mode 100644 index 00000000000000..8ea3180c0c218f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_coreyabs_db_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_coreyabs_db_pipeline pipeline MarianTransformer from coreyabs-db +author: John Snow Labs +name: marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_coreyabs_db_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_coreyabs_db_pipeline` is a English model originally trained by coreyabs-db. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_coreyabs_db_pipeline_en_5.5.0_3.0_1725864733585.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_coreyabs_db_pipeline_en_5.5.0_3.0_1725864733585.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_coreyabs_db_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_coreyabs_db_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_coreyabs_db_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|508.7 MB| + +## References + +https://huggingface.co/coreyabs-db/marian-finetuned-kde4-en-to-fr-accelerate + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_smilemikan_en.md b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_smilemikan_en.md new file mode 100644 index 00000000000000..f90835427aabb3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_smilemikan_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_smilemikan MarianTransformer from smilemikan +author: John Snow Labs +name: marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_smilemikan +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_smilemikan` is a English model originally trained by smilemikan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_smilemikan_en_5.5.0_3.0_1725914115665.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_smilemikan_en_5.5.0_3.0_1725914115665.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_smilemikan","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_smilemikan","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
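+
+For single sentences it is often more convenient to wrap the fitted pipeline in a `LightPipeline`, which runs on the driver and returns plain Python structures. A minimal sketch built on the example above:
+
+```python
+from sparknlp.base import LightPipeline
+
+light = LightPipeline(pipelineModel)
+
+# annotate() returns a dict keyed by output column; "translation" holds the translated sentences.
+result = light.annotate("The weather is beautiful today.")
+print(result["translation"])
+```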
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_kde4_english_tonga_tonga_islands_french_accelerate_smilemikan| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.2 MB| + +## References + +https://huggingface.co/smilemikan/marian-finetuned-kde4-en-to-fr-accelerate \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_bubblejoe_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_bubblejoe_pipeline_en.md new file mode 100644 index 00000000000000..0d44a3d5f54674 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_bubblejoe_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English marian_finetuned_kde4_english_tonga_tonga_islands_french_bubblejoe_pipeline pipeline MarianTransformer from BubbleJoe +author: John Snow Labs +name: marian_finetuned_kde4_english_tonga_tonga_islands_french_bubblejoe_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_kde4_english_tonga_tonga_islands_french_bubblejoe_pipeline` is a English model originally trained by BubbleJoe. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_bubblejoe_pipeline_en_5.5.0_3.0_1725863154485.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_bubblejoe_pipeline_en_5.5.0_3.0_1725863154485.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("marian_finetuned_kde4_english_tonga_tonga_islands_french_bubblejoe_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("marian_finetuned_kde4_english_tonga_tonga_islands_french_bubblejoe_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_kde4_english_tonga_tonga_islands_french_bubblejoe_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|508.7 MB| + +## References + +https://huggingface.co/BubbleJoe/marian-finetuned-kde4-en-to-fr + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_desmondbai_en.md b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_desmondbai_en.md new file mode 100644 index 00000000000000..a06b5eeb916e97 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_desmondbai_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English marian_finetuned_kde4_english_tonga_tonga_islands_french_desmondbai MarianTransformer from desmondbai +author: John Snow Labs +name: marian_finetuned_kde4_english_tonga_tonga_islands_french_desmondbai +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_kde4_english_tonga_tonga_islands_french_desmondbai` is a English model originally trained by desmondbai. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_desmondbai_en_5.5.0_3.0_1725914055428.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_desmondbai_en_5.5.0_3.0_1725914055428.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("marian_finetuned_kde4_english_tonga_tonga_islands_french_desmondbai","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("marian_finetuned_kde4_english_tonga_tonga_islands_french_desmondbai","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_kde4_english_tonga_tonga_islands_french_desmondbai| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.2 MB| + +## References + +https://huggingface.co/desmondbai/marian-finetuned-kde4-en-to-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_fah_d_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_fah_d_pipeline_en.md new file mode 100644 index 00000000000000..dc45bf08e8d6f0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_fah_d_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English marian_finetuned_kde4_english_tonga_tonga_islands_french_fah_d_pipeline pipeline MarianTransformer from Fah-d +author: John Snow Labs +name: marian_finetuned_kde4_english_tonga_tonga_islands_french_fah_d_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_kde4_english_tonga_tonga_islands_french_fah_d_pipeline` is a English model originally trained by Fah-d. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_fah_d_pipeline_en_5.5.0_3.0_1725913466183.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_fah_d_pipeline_en_5.5.0_3.0_1725913466183.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("marian_finetuned_kde4_english_tonga_tonga_islands_french_fah_d_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("marian_finetuned_kde4_english_tonga_tonga_islands_french_fah_d_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_kde4_english_tonga_tonga_islands_french_fah_d_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|508.6 MB| + +## References + +https://huggingface.co/Fah-d/marian-finetuned-kde4-en-to-fr + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_ss1020_en.md b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_ss1020_en.md new file mode 100644 index 00000000000000..3685a01714c3e3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-marian_finetuned_kde4_english_tonga_tonga_islands_french_ss1020_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English marian_finetuned_kde4_english_tonga_tonga_islands_french_ss1020 MarianTransformer from SS1020 +author: John Snow Labs +name: marian_finetuned_kde4_english_tonga_tonga_islands_french_ss1020 +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marian_finetuned_kde4_english_tonga_tonga_islands_french_ss1020` is a English model originally trained by SS1020. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_ss1020_en_5.5.0_3.0_1725913396873.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marian_finetuned_kde4_english_tonga_tonga_islands_french_ss1020_en_5.5.0_3.0_1725913396873.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("marian_finetuned_kde4_english_tonga_tonga_islands_french_ss1020","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("marian_finetuned_kde4_english_tonga_tonga_islands_french_ss1020","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
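+
+Once the pipeline above has been fitted and applied, the translated text can be read back from the `translation` column produced by the MarianTransformer stage. A minimal, illustrative continuation of the example (the column name follows the code above):
+
+```python
+# One translated row per detected sentence
+pipelineDF.selectExpr("explode(translation.result) as translated_text").show(truncate=False)
+```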
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marian_finetuned_kde4_english_tonga_tonga_islands_french_ss1020| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.1 MB| + +## References + +https://huggingface.co/SS1020/marian-finetuned-kde4-en-to-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-marianmix_english_10_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-marianmix_english_10_pipeline_en.md new file mode 100644 index 00000000000000..e90a207e268cd4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-marianmix_english_10_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English marianmix_english_10_pipeline pipeline MarianTransformer from eldor-97 +author: John Snow Labs +name: marianmix_english_10_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`marianmix_english_10_pipeline` is a English model originally trained by eldor-97. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/marianmix_english_10_pipeline_en_5.5.0_3.0_1725914637700.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/marianmix_english_10_pipeline_en_5.5.0_3.0_1725914637700.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("marianmix_english_10_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("marianmix_english_10_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|marianmix_english_10_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|724.2 MB| + +## References + +https://huggingface.co/eldor-97/MarianMix_en-10 + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-math_pretrained_roberta_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-math_pretrained_roberta_pipeline_en.md new file mode 100644 index 00000000000000..00a12701a34891 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-math_pretrained_roberta_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English math_pretrained_roberta_pipeline pipeline RoBertaEmbeddings from AnReu +author: John Snow Labs +name: math_pretrained_roberta_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`math_pretrained_roberta_pipeline` is a English model originally trained by AnReu. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/math_pretrained_roberta_pipeline_en_5.5.0_3.0_1725860944056.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/math_pretrained_roberta_pipeline_en_5.5.0_3.0_1725860944056.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("math_pretrained_roberta_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("math_pretrained_roberta_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|math_pretrained_roberta_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|467.6 MB| + +## References + +https://huggingface.co/AnReu/math_pretrained_roberta + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-mdebertav3_subjectivity_arabic_ar.md b/docs/_posts/ahmedlone127/2024-09-09-mdebertav3_subjectivity_arabic_ar.md new file mode 100644 index 00000000000000..2e8b0bcd39ed71 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-mdebertav3_subjectivity_arabic_ar.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Arabic mdebertav3_subjectivity_arabic DeBertaForSequenceClassification from GroNLP +author: John Snow Labs +name: mdebertav3_subjectivity_arabic +date: 2024-09-09 +tags: [ar, open_source, onnx, sequence_classification, deberta] +task: Text Classification +language: ar +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DeBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mdebertav3_subjectivity_arabic` is a Arabic model originally trained by GroNLP. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mdebertav3_subjectivity_arabic_ar_5.5.0_3.0_1725858737600.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mdebertav3_subjectivity_arabic_ar_5.5.0_3.0_1725858737600.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DeBertaForSequenceClassification.pretrained("mdebertav3_subjectivity_arabic","ar") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DeBertaForSequenceClassification.pretrained("mdebertav3_subjectivity_arabic", "ar")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mdebertav3_subjectivity_arabic| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|ar| +|Size:|810.0 MB| + +## References + +https://huggingface.co/GroNLP/mdebertav3-subjectivity-arabic \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-mdebertav3_subjectivity_arabic_pipeline_ar.md b/docs/_posts/ahmedlone127/2024-09-09-mdebertav3_subjectivity_arabic_pipeline_ar.md new file mode 100644 index 00000000000000..5c8295f7c7f799 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-mdebertav3_subjectivity_arabic_pipeline_ar.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Arabic mdebertav3_subjectivity_arabic_pipeline pipeline DeBertaForSequenceClassification from GroNLP +author: John Snow Labs +name: mdebertav3_subjectivity_arabic_pipeline +date: 2024-09-09 +tags: [ar, open_source, pipeline, onnx] +task: Text Classification +language: ar +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mdebertav3_subjectivity_arabic_pipeline` is a Arabic model originally trained by GroNLP. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mdebertav3_subjectivity_arabic_pipeline_ar_5.5.0_3.0_1725858873746.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mdebertav3_subjectivity_arabic_pipeline_ar_5.5.0_3.0_1725858873746.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("mdebertav3_subjectivity_arabic_pipeline", lang = "ar") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("mdebertav3_subjectivity_arabic_pipeline", lang = "ar") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mdebertav3_subjectivity_arabic_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|ar| +|Size:|810.0 MB| + +## References + +https://huggingface.co/GroNLP/mdebertav3-subjectivity-arabic + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-medical_english_german_9_5_en.md b/docs/_posts/ahmedlone127/2024-09-09-medical_english_german_9_5_en.md new file mode 100644 index 00000000000000..c20b50a28ce4f0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-medical_english_german_9_5_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English medical_english_german_9_5 MarianTransformer from DogGoesBark +author: John Snow Labs +name: medical_english_german_9_5 +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`medical_english_german_9_5` is a English model originally trained by DogGoesBark. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/medical_english_german_9_5_en_5.5.0_3.0_1725913984945.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/medical_english_german_9_5_en_5.5.0_3.0_1725913984945.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("medical_english_german_9_5","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("medical_english_german_9_5","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|medical_english_german_9_5| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|499.3 MB| + +## References + +https://huggingface.co/DogGoesBark/medical_en_de_9_5 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-medical_pubmed_8_17_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-medical_pubmed_8_17_pipeline_en.md new file mode 100644 index 00000000000000..41ab7549c02dae --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-medical_pubmed_8_17_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English medical_pubmed_8_17_pipeline pipeline MarianTransformer from DogGoesBark +author: John Snow Labs +name: medical_pubmed_8_17_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`medical_pubmed_8_17_pipeline` is a English model originally trained by DogGoesBark. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/medical_pubmed_8_17_pipeline_en_5.5.0_3.0_1725863890899.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/medical_pubmed_8_17_pipeline_en_5.5.0_3.0_1725863890899.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("medical_pubmed_8_17_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("medical_pubmed_8_17_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|medical_pubmed_8_17_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|540.6 MB| + +## References + +https://huggingface.co/DogGoesBark/medical_pubmed_8_17 + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-medicalquestionanswering_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-medicalquestionanswering_pipeline_en.md new file mode 100644 index 00000000000000..a7e76d561b96ac --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-medicalquestionanswering_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English medicalquestionanswering_pipeline pipeline BertForQuestionAnswering from GonzaloValdenebro +author: John Snow Labs +name: medicalquestionanswering_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`medicalquestionanswering_pipeline` is a English model originally trained by GonzaloValdenebro. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/medicalquestionanswering_pipeline_en_5.5.0_3.0_1725885308101.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/medicalquestionanswering_pipeline_en_5.5.0_3.0_1725885308101.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("medicalquestionanswering_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("medicalquestionanswering_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|medicalquestionanswering_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.0 MB| + +## References + +https://huggingface.co/GonzaloValdenebro/MedicalQuestionAnswering + +## Included Models + +- MultiDocumentAssembler +- BertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-mi_tinyroberta_cause_en.md b/docs/_posts/ahmedlone127/2024-09-09-mi_tinyroberta_cause_en.md new file mode 100644 index 00000000000000..0f60c6bf39ecb8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-mi_tinyroberta_cause_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English mi_tinyroberta_cause RoBertaForQuestionAnswering from Juncodh +author: John Snow Labs +name: mi_tinyroberta_cause +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mi_tinyroberta_cause` is a English model originally trained by Juncodh. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mi_tinyroberta_cause_en_5.5.0_3.0_1725875921682.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mi_tinyroberta_cause_en_5.5.0_3.0_1725875921682.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = RoBertaForQuestionAnswering.pretrained("mi_tinyroberta_cause","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = RoBertaForQuestionAnswering.pretrained("mi_tinyroberta_cause", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
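+
+The predicted answer span ends up in the `answer` column of `pipelineDF`. A short, illustrative continuation of the example above:
+
+```python
+# Show the extracted answer for each question/context pair
+pipelineDF.select("answer.result").show(truncate=False)
+```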
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mi_tinyroberta_cause| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|306.2 MB| + +## References + +https://huggingface.co/Juncodh/mi_tinyROBERTA_cause \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-mlm_jjk_subtitle_v2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-mlm_jjk_subtitle_v2_pipeline_en.md new file mode 100644 index 00000000000000..3c1767962ac36e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-mlm_jjk_subtitle_v2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English mlm_jjk_subtitle_v2_pipeline pipeline DistilBertEmbeddings from kaiku03 +author: John Snow Labs +name: mlm_jjk_subtitle_v2_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mlm_jjk_subtitle_v2_pipeline` is a English model originally trained by kaiku03. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mlm_jjk_subtitle_v2_pipeline_en_5.5.0_3.0_1725921274709.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mlm_jjk_subtitle_v2_pipeline_en_5.5.0_3.0_1725921274709.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("mlm_jjk_subtitle_v2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("mlm_jjk_subtitle_v2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mlm_jjk_subtitle_v2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/kaiku03/MLM_JJK_SUBTITLE_V2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-mobilebert_finetuned_ner_mrm8488_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-mobilebert_finetuned_ner_mrm8488_pipeline_en.md new file mode 100644 index 00000000000000..4ad523c6c43762 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-mobilebert_finetuned_ner_mrm8488_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English mobilebert_finetuned_ner_mrm8488_pipeline pipeline BertForTokenClassification from mrm8488 +author: John Snow Labs +name: mobilebert_finetuned_ner_mrm8488_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mobilebert_finetuned_ner_mrm8488_pipeline` is a English model originally trained by mrm8488. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mobilebert_finetuned_ner_mrm8488_pipeline_en_5.5.0_3.0_1725886746770.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mobilebert_finetuned_ner_mrm8488_pipeline_en_5.5.0_3.0_1725886746770.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("mobilebert_finetuned_ner_mrm8488_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("mobilebert_finetuned_ner_mrm8488_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mobilebert_finetuned_ner_mrm8488_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|92.6 MB| + +## References + +https://huggingface.co/mrm8488/mobilebert-finetuned-ner + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-model_en.md b/docs/_posts/ahmedlone127/2024-09-09-model_en.md new file mode 100644 index 00000000000000..a03bb1ea95c9e4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-model_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English model DistilBertEmbeddings from Dinithi +author: John Snow Labs +name: model +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`model` is a English model originally trained by Dinithi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/model_en_5.5.0_3.0_1725905681806.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/model_en_5.5.0_3.0_1725905681806.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("model","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("model","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Dinithi/Model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-model_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-model_pipeline_en.md new file mode 100644 index 00000000000000..b9b50d21c5a9d6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-model_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English model_pipeline pipeline DistilBertEmbeddings from Dinithi +author: John Snow Labs +name: model_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`model_pipeline` is a English model originally trained by Dinithi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/model_pipeline_en_5.5.0_3.0_1725905694976.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/model_pipeline_en_5.5.0_3.0_1725905694976.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("model_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("model_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Dinithi/Model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-mpnet_base_natural_questions_mnrl_en.md b/docs/_posts/ahmedlone127/2024-09-09-mpnet_base_natural_questions_mnrl_en.md new file mode 100644 index 00000000000000..c523163bc35340 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-mpnet_base_natural_questions_mnrl_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English mpnet_base_natural_questions_mnrl MPNetEmbeddings from tomaarsen +author: John Snow Labs +name: mpnet_base_natural_questions_mnrl +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mpnet_base_natural_questions_mnrl` is a English model originally trained by tomaarsen. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mpnet_base_natural_questions_mnrl_en_5.5.0_3.0_1725897128393.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mpnet_base_natural_questions_mnrl_en_5.5.0_3.0_1725897128393.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("mpnet_base_natural_questions_mnrl","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("mpnet_base_natural_questions_mnrl","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
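+
+If the sentence embeddings are needed as plain Spark ML vectors rather than Spark NLP annotations, one common option is to finish the pipeline output with an `EmbeddingsFinisher`. The sketch below is an illustrative assumption using the column names from the example above, not part of the original model card:
+
+```python
+from sparknlp.base import EmbeddingsFinisher
+
+# Convert the "embeddings" annotation column into plain vectors
+finisher = EmbeddingsFinisher() \
+    .setInputCols(["embeddings"]) \
+    .setOutputCols(["finished_embeddings"]) \
+    .setOutputAsVector(True)
+
+finisher.transform(pipelineDF).select("finished_embeddings").show(truncate=False)
+```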
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mpnet_base_natural_questions_mnrl| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.6 MB| + +## References + +https://huggingface.co/tomaarsen/mpnet-base-natural-questions-mnrl \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-mpnet_qa_en.md b/docs/_posts/ahmedlone127/2024-09-09-mpnet_qa_en.md new file mode 100644 index 00000000000000..afea5485948e8d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-mpnet_qa_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English mpnet_qa MPNetEmbeddings from jamescalam +author: John Snow Labs +name: mpnet_qa +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mpnet_qa` is a English model originally trained by jamescalam. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mpnet_qa_en_5.5.0_3.0_1725874659156.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mpnet_qa_en_5.5.0_3.0_1725874659156.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("mpnet_qa","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("mpnet_qa","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mpnet_qa| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/jamescalam/mpnet-qa \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-mpnetforclassification_en.md b/docs/_posts/ahmedlone127/2024-09-09-mpnetforclassification_en.md new file mode 100644 index 00000000000000..1d4339bcbf33da --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-mpnetforclassification_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English mpnetforclassification MPNetForSequenceClassification from poooj +author: John Snow Labs +name: mpnetforclassification +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, mpnet] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mpnetforclassification` is a English model originally trained by poooj. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mpnetforclassification_en_5.5.0_3.0_1725881042658.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mpnetforclassification_en_5.5.0_3.0_1725881042658.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = MPNetForSequenceClassification.pretrained("mpnetforclassification","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = MPNetForSequenceClassification.pretrained("mpnetforclassification", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mpnetforclassification| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.4 MB| + +## References + +https://huggingface.co/poooj/MPNetForClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-mpnetv2_com_en.md b/docs/_posts/ahmedlone127/2024-09-09-mpnetv2_com_en.md new file mode 100644 index 00000000000000..fdc5c7b23fd895 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-mpnetv2_com_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English mpnetv2_com MPNetEmbeddings from Neokun004 +author: John Snow Labs +name: mpnetv2_com +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mpnetv2_com` is a English model originally trained by Neokun004. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mpnetv2_com_en_5.5.0_3.0_1725874651437.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mpnetv2_com_en_5.5.0_3.0_1725874651437.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("mpnetv2_com","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("mpnetv2_com","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mpnetv2_com| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/Neokun004/mpnetv2_com \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-multilingual_xlm_roberta_for_ner_apak_xx.md b/docs/_posts/ahmedlone127/2024-09-09-multilingual_xlm_roberta_for_ner_apak_xx.md new file mode 100644 index 00000000000000..d4a88eebc5a81e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-multilingual_xlm_roberta_for_ner_apak_xx.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Multilingual multilingual_xlm_roberta_for_ner_apak XlmRoBertaForTokenClassification from apak +author: John Snow Labs +name: multilingual_xlm_roberta_for_ner_apak +date: 2024-09-09 +tags: [xx, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`multilingual_xlm_roberta_for_ner_apak` is a Multilingual model originally trained by apak. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/multilingual_xlm_roberta_for_ner_apak_xx_5.5.0_3.0_1725919278379.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/multilingual_xlm_roberta_for_ner_apak_xx_5.5.0_3.0_1725919278379.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("multilingual_xlm_roberta_for_ner_apak","xx") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("multilingual_xlm_roberta_for_ner_apak", "xx")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|multilingual_xlm_roberta_for_ner_apak| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|xx| +|Size:|839.7 MB| + +## References + +https://huggingface.co/apak/multilingual-xlm-roberta-for-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-n_roberta_imdb_padding0model_en.md b/docs/_posts/ahmedlone127/2024-09-09-n_roberta_imdb_padding0model_en.md new file mode 100644 index 00000000000000..099eb4e00c63f1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-n_roberta_imdb_padding0model_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English n_roberta_imdb_padding0model RoBertaForSequenceClassification from Realgon +author: John Snow Labs +name: n_roberta_imdb_padding0model +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`n_roberta_imdb_padding0model` is a English model originally trained by Realgon. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/n_roberta_imdb_padding0model_en_5.5.0_3.0_1725903698412.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/n_roberta_imdb_padding0model_en_5.5.0_3.0_1725903698412.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("n_roberta_imdb_padding0model","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("n_roberta_imdb_padding0model", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|n_roberta_imdb_padding0model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|463.1 MB| + +## References + +https://huggingface.co/Realgon/N_roberta_imdb_padding0model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-n_roberta_imdb_padding0model_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-n_roberta_imdb_padding0model_pipeline_en.md new file mode 100644 index 00000000000000..3d37d03b4f05b3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-n_roberta_imdb_padding0model_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English n_roberta_imdb_padding0model_pipeline pipeline RoBertaForSequenceClassification from Realgon +author: John Snow Labs +name: n_roberta_imdb_padding0model_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`n_roberta_imdb_padding0model_pipeline` is a English model originally trained by Realgon. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/n_roberta_imdb_padding0model_pipeline_en_5.5.0_3.0_1725903723083.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/n_roberta_imdb_padding0model_pipeline_en_5.5.0_3.0_1725903723083.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("n_roberta_imdb_padding0model_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("n_roberta_imdb_padding0model_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|n_roberta_imdb_padding0model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|463.1 MB| + +## References + +https://huggingface.co/Realgon/N_roberta_imdb_padding0model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-nepal_bhasa_model_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-nepal_bhasa_model_pipeline_en.md new file mode 100644 index 00000000000000..1b20aa436972d8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-nepal_bhasa_model_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English nepal_bhasa_model_pipeline pipeline RoBertaForQuestionAnswering from marwanimroz18 +author: John Snow Labs +name: nepal_bhasa_model_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`nepal_bhasa_model_pipeline` is a English model originally trained by marwanimroz18. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/nepal_bhasa_model_pipeline_en_5.5.0_3.0_1725876361299.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/nepal_bhasa_model_pipeline_en_5.5.0_3.0_1725876361299.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("nepal_bhasa_model_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("nepal_bhasa_model_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|nepal_bhasa_model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/marwanimroz18/new_model + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-ner_distilbert_nlpproject_en.md b/docs/_posts/ahmedlone127/2024-09-09-ner_distilbert_nlpproject_en.md new file mode 100644 index 00000000000000..7eeb10b2fe20e9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-ner_distilbert_nlpproject_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English ner_distilbert_nlpproject DistilBertForTokenClassification from nlpproject +author: John Snow Labs +name: ner_distilbert_nlpproject +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ner_distilbert_nlpproject` is a English model originally trained by nlpproject. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ner_distilbert_nlpproject_en_5.5.0_3.0_1725889773210.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ner_distilbert_nlpproject_en_5.5.0_3.0_1725889773210.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("ner_distilbert_nlpproject","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("ner_distilbert_nlpproject", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
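+
+The token classifier emits one IOB tag per token in the `ner` column. To group those tags into entity chunks, a `NerConverter` can be applied to the transformed DataFrame; the snippet below is an illustrative sketch using the column names from the example above:
+
+```python
+from sparknlp.annotator import NerConverter
+
+# Merge IOB-tagged tokens into complete entity chunks
+nerConverter = NerConverter() \
+    .setInputCols(["document", "token", "ner"]) \
+    .setOutputCol("ner_chunk")
+
+nerConverter.transform(pipelineDF).select("ner_chunk.result").show(truncate=False)
+```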
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ner_distilbert_nlpproject| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/nlpproject/NER_distilBERT \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-ner_ner_random2_seed1_bernice_en.md b/docs/_posts/ahmedlone127/2024-09-09-ner_ner_random2_seed1_bernice_en.md new file mode 100644 index 00000000000000..40d5443d864434 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-ner_ner_random2_seed1_bernice_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English ner_ner_random2_seed1_bernice XlmRoBertaForTokenClassification from tweettemposhift +author: John Snow Labs +name: ner_ner_random2_seed1_bernice +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ner_ner_random2_seed1_bernice` is a English model originally trained by tweettemposhift. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ner_ner_random2_seed1_bernice_en_5.5.0_3.0_1725895235281.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ner_ner_random2_seed1_bernice_en_5.5.0_3.0_1725895235281.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
 +{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = XlmRoBertaForTokenClassification.pretrained("ner_ner_random2_seed1_bernice","en") \ + .setInputCols(["document","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("ner_ner_random2_seed1_bernice", "en") + .setInputCols(Array("document","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ner_ner_random2_seed1_bernice| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|802.5 MB| + +## References + +https://huggingface.co/tweettemposhift/ner-ner_random2_seed1-bernice \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-ner_ner_random2_seed1_bernice_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-ner_ner_random2_seed1_bernice_pipeline_en.md new file mode 100644 index 00000000000000..20c26d5b1f1371 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-ner_ner_random2_seed1_bernice_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English ner_ner_random2_seed1_bernice_pipeline pipeline XlmRoBertaForTokenClassification from tweettemposhift +author: John Snow Labs +name: ner_ner_random2_seed1_bernice_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ner_ner_random2_seed1_bernice_pipeline` is a English model originally trained by tweettemposhift. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ner_ner_random2_seed1_bernice_pipeline_en_5.5.0_3.0_1725895371899.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ner_ner_random2_seed1_bernice_pipeline_en_5.5.0_3.0_1725895371899.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("ner_ner_random2_seed1_bernice_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("ner_ner_random2_seed1_bernice_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ner_ner_random2_seed1_bernice_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|802.6 MB| + +## References + +https://huggingface.co/tweettemposhift/ner-ner_random2_seed1-bernice + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-ner_ner_temporal_bernice_en.md b/docs/_posts/ahmedlone127/2024-09-09-ner_ner_temporal_bernice_en.md new file mode 100644 index 00000000000000..52c3bfee197ec2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-ner_ner_temporal_bernice_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English ner_ner_temporal_bernice XlmRoBertaForTokenClassification from tweettemposhift +author: John Snow Labs +name: ner_ner_temporal_bernice +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ner_ner_temporal_bernice` is a English model originally trained by tweettemposhift. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ner_ner_temporal_bernice_en_5.5.0_3.0_1725895573735.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ner_ner_temporal_bernice_en_5.5.0_3.0_1725895573735.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
 +{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = XlmRoBertaForTokenClassification.pretrained("ner_ner_temporal_bernice","en") \ + .setInputCols(["document","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("ner_ner_temporal_bernice", "en") + .setInputCols(Array("document","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ner_ner_temporal_bernice| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|802.0 MB| + +## References + +https://huggingface.co/tweettemposhift/ner-ner_temporal-bernice \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-nerd_nerd_random3_seed2_bernice_en.md b/docs/_posts/ahmedlone127/2024-09-09-nerd_nerd_random3_seed2_bernice_en.md new file mode 100644 index 00000000000000..869d2f5617140b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-nerd_nerd_random3_seed2_bernice_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English nerd_nerd_random3_seed2_bernice XlmRoBertaForSequenceClassification from tweettemposhift +author: John Snow Labs +name: nerd_nerd_random3_seed2_bernice +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`nerd_nerd_random3_seed2_bernice` is a English model originally trained by tweettemposhift. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/nerd_nerd_random3_seed2_bernice_en_5.5.0_3.0_1725870100934.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/nerd_nerd_random3_seed2_bernice_en_5.5.0_3.0_1725870100934.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
 +{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("nerd_nerd_random3_seed2_bernice","en") \ + .setInputCols(["document","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("nerd_nerd_random3_seed2_bernice", "en") + .setInputCols(Array("document","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|nerd_nerd_random3_seed2_bernice| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|832.0 MB| + +## References + +https://huggingface.co/tweettemposhift/nerd-nerd_random3_seed2-bernice \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-news_classification_distilbert_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-news_classification_distilbert_pipeline_en.md new file mode 100644 index 00000000000000..b1b38b14030bc0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-news_classification_distilbert_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English news_classification_distilbert_pipeline pipeline DistilBertForSequenceClassification from Laurie +author: John Snow Labs +name: news_classification_distilbert_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`news_classification_distilbert_pipeline` is a English model originally trained by Laurie. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/news_classification_distilbert_pipeline_en_5.5.0_3.0_1725873174794.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/news_classification_distilbert_pipeline_en_5.5.0_3.0_1725873174794.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("news_classification_distilbert_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("news_classification_distilbert_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
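 + +The `df` referenced above is any Spark DataFrame holding the text to classify. A minimal sketch, assuming an active `spark` session and that the pipeline's `DocumentAssembler` reads a column named `text` (the example sentence is illustrative only): + +```python + +# Sketch: build an input DataFrame and run the pretrained pipeline on it +df = spark.createDataFrame( + [["Stocks rallied after the central bank held rates steady."]] +).toDF("text") + +annotations = pipeline.transform(df) +annotations.printSchema() # inspect which annotation columns the pipeline produced + +``` 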
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|news_classification_distilbert_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/Laurie/news_classification_distilbert + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-none_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-none_pipeline_en.md new file mode 100644 index 00000000000000..397527a4f42a75 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-none_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English none_pipeline pipeline RoBertaForSequenceClassification from rose-e-wang +author: John Snow Labs +name: none_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`none_pipeline` is a English model originally trained by rose-e-wang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/none_pipeline_en_5.5.0_3.0_1725912113388.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/none_pipeline_en_5.5.0_3.0_1725912113388.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("none_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("none_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|none_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/rose-e-wang/None + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-nusax_senti_xlm_r_en.md b/docs/_posts/ahmedlone127/2024-09-09-nusax_senti_xlm_r_en.md new file mode 100644 index 00000000000000..f20ffa3609e419 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-nusax_senti_xlm_r_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English nusax_senti_xlm_r XlmRoBertaForSequenceClassification from Cincin-nvp +author: John Snow Labs +name: nusax_senti_xlm_r +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`nusax_senti_xlm_r` is a English model originally trained by Cincin-nvp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/nusax_senti_xlm_r_en_5.5.0_3.0_1725871498269.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/nusax_senti_xlm_r_en_5.5.0_3.0_1725871498269.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
 +{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("nusax_senti_xlm_r","en") \ + .setInputCols(["document","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("nusax_senti_xlm_r", "en") + .setInputCols(Array("document","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|nusax_senti_xlm_r| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|829.2 MB| + +## References + +https://huggingface.co/Cincin-nvp/NusaX-senti_XLM-R \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-odia_whisper_small_v3_0_pipeline_or.md b/docs/_posts/ahmedlone127/2024-09-09-odia_whisper_small_v3_0_pipeline_or.md new file mode 100644 index 00000000000000..f6b96ae6c40440 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-odia_whisper_small_v3_0_pipeline_or.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Oriya (macrolanguage) odia_whisper_small_v3_0_pipeline pipeline WhisperForCTC from Ranjit +author: John Snow Labs +name: odia_whisper_small_v3_0_pipeline +date: 2024-09-09 +tags: [or, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: or +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`odia_whisper_small_v3_0_pipeline` is a Oriya (macrolanguage) model originally trained by Ranjit. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/odia_whisper_small_v3_0_pipeline_or_5.5.0_3.0_1725847948987.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/odia_whisper_small_v3_0_pipeline_or_5.5.0_3.0_1725847948987.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("odia_whisper_small_v3_0_pipeline", lang = "or") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("odia_whisper_small_v3_0_pipeline", lang = "or") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|odia_whisper_small_v3_0_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|or| +|Size:|1.7 GB| + +## References + +https://huggingface.co/Ranjit/odia_whisper_small_v3.0 + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-olm_roberta_base_dec_2022_en.md b/docs/_posts/ahmedlone127/2024-09-09-olm_roberta_base_dec_2022_en.md new file mode 100644 index 00000000000000..d62e4da526f45a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-olm_roberta_base_dec_2022_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English olm_roberta_base_dec_2022 RoBertaEmbeddings from olm +author: John Snow Labs +name: olm_roberta_base_dec_2022 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`olm_roberta_base_dec_2022` is a English model originally trained by olm. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/olm_roberta_base_dec_2022_en_5.5.0_3.0_1725860368859.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/olm_roberta_base_dec_2022_en_5.5.0_3.0_1725860368859.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("olm_roberta_base_dec_2022","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("olm_roberta_base_dec_2022","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
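 + +Each row of `pipelineDF` carries an `embeddings` column with one annotation per token. A small sketch of how the raw vectors could be pulled out, assuming the standard Spark NLP annotation schema (`result` holds the token text, `embeddings` the vector): + +```python + +# Sketch: one row per token with its embedding vector +from pyspark.sql.functions import explode, col + +pipelineDF \ + .select(explode(col("embeddings")).alias("emb")) \ + .select(col("emb.result").alias("token"), col("emb.embeddings").alias("vector")) \ + .show(truncate=80) + +``` 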
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|olm_roberta_base_dec_2022| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|465.5 MB| + +## References + +https://huggingface.co/olm/olm-roberta-base-dec-2022 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_base_wce_adaptified_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_base_wce_adaptified_en.md new file mode 100644 index 00000000000000..3f86cbefdd1b49 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_base_wce_adaptified_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_base_wce_adaptified MarianTransformer from ethansimrm +author: John Snow Labs +name: opus_base_wce_adaptified +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_base_wce_adaptified` is a English model originally trained by ethansimrm. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_base_wce_adaptified_en_5.5.0_3.0_1725914355071.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_base_wce_adaptified_en_5.5.0_3.0_1725914355071.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
 +{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \ + .setInputCols(["document"]) \ + .setOutputCol("sentence") + +marian = MarianTransformer.pretrained("opus_base_wce_adaptified","en") \ + .setInputCols(["sentence"]) \ + .setOutputCol("translation") + +pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") + .setInputCols(Array("document")) + .setOutputCol("sentence") + +val marian = MarianTransformer.pretrained("opus_base_wce_adaptified","en") + .setInputCols(Array("sentence")) + .setOutputCol("translation") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
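 + +The `MarianTransformer` stage above writes its output to the `translation` column. A short sketch of how the translated sentences could be read back, assuming the standard annotation schema (`result` holds the generated text): + +```python + +# Sketch: collect the translated sentences produced by the MarianTransformer stage +from pyspark.sql.functions import explode, col + +pipelineDF \ + .select(explode(col("translation")).alias("t")) \ + .select(col("t.result").alias("translated_text")) \ + .show(truncate=False) + +``` 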
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_base_wce_adaptified| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.4 MB| + +## References + +https://huggingface.co/ethansimrm/opus_base_wce_adaptified \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_base_wce_adaptified_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_base_wce_adaptified_pipeline_en.md new file mode 100644 index 00000000000000..64a80fd3fba7e9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_base_wce_adaptified_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_base_wce_adaptified_pipeline pipeline MarianTransformer from ethansimrm +author: John Snow Labs +name: opus_base_wce_adaptified_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_base_wce_adaptified_pipeline` is a English model originally trained by ethansimrm. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_base_wce_adaptified_pipeline_en_5.5.0_3.0_1725914380715.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_base_wce_adaptified_pipeline_en_5.5.0_3.0_1725914380715.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_base_wce_adaptified_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_base_wce_adaptified_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_base_wce_adaptified_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|509.0 MB| + +## References + +https://huggingface.co/ethansimrm/opus_base_wce_adaptified + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_base_wce_antagonistic_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_base_wce_antagonistic_en.md new file mode 100644 index 00000000000000..3ca20ab6b37e63 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_base_wce_antagonistic_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_base_wce_antagonistic MarianTransformer from ethansimrm +author: John Snow Labs +name: opus_base_wce_antagonistic +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_base_wce_antagonistic` is a English model originally trained by ethansimrm. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_base_wce_antagonistic_en_5.5.0_3.0_1725864170788.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_base_wce_antagonistic_en_5.5.0_3.0_1725864170788.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
 +{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \ + .setInputCols(["document"]) \ + .setOutputCol("sentence") + +marian = MarianTransformer.pretrained("opus_base_wce_antagonistic","en") \ + .setInputCols(["sentence"]) \ + .setOutputCol("translation") + +pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") + .setInputCols(Array("document")) + .setOutputCol("sentence") + +val marian = MarianTransformer.pretrained("opus_base_wce_antagonistic","en") + .setInputCols(Array("sentence")) + .setOutputCol("translation") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_base_wce_antagonistic| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.4 MB| + +## References + +https://huggingface.co/ethansimrm/opus_base_wce_antagonistic \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_english_arabic_fine_tuned_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_english_arabic_fine_tuned_en.md new file mode 100644 index 00000000000000..bbcc04e0c082ca --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_english_arabic_fine_tuned_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_english_arabic_fine_tuned MarianTransformer from desi6ner +author: John Snow Labs +name: opus_english_arabic_fine_tuned +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_english_arabic_fine_tuned` is a English model originally trained by desi6ner. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_english_arabic_fine_tuned_en_5.5.0_3.0_1725864111169.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_english_arabic_fine_tuned_en_5.5.0_3.0_1725864111169.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
 +{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \ + .setInputCols(["document"]) \ + .setOutputCol("sentence") + +marian = MarianTransformer.pretrained("opus_english_arabic_fine_tuned","en") \ + .setInputCols(["sentence"]) \ + .setOutputCol("translation") + +pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") + .setInputCols(Array("document")) + .setOutputCol("sentence") + +val marian = MarianTransformer.pretrained("opus_english_arabic_fine_tuned","en") + .setInputCols(Array("sentence")) + .setOutputCol("translation") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_english_arabic_fine_tuned| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|527.7 MB| + +## References + +https://huggingface.co/desi6ner/opus-en-ar-fine-tuned \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_english_arabic_fine_tuned_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_english_arabic_fine_tuned_pipeline_en.md new file mode 100644 index 00000000000000..33771af2a0ba13 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_english_arabic_fine_tuned_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_english_arabic_fine_tuned_pipeline pipeline MarianTransformer from desi6ner +author: John Snow Labs +name: opus_english_arabic_fine_tuned_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_english_arabic_fine_tuned_pipeline` is a English model originally trained by desi6ner. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_english_arabic_fine_tuned_pipeline_en_5.5.0_3.0_1725864138736.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_english_arabic_fine_tuned_pipeline_en_5.5.0_3.0_1725864138736.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_english_arabic_fine_tuned_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_english_arabic_fine_tuned_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_english_arabic_fine_tuned_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|528.2 MB| + +## References + +https://huggingface.co/desi6ner/opus-en-ar-fine-tuned + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_jesc_japanese_en2_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_jesc_japanese_en2_en.md new file mode 100644 index 00000000000000..d48f1049f9e9b0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_jesc_japanese_en2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_jesc_japanese_en2 MarianTransformer from nomadsenshi +author: John Snow Labs +name: opus_jesc_japanese_en2 +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_jesc_japanese_en2` is a English model originally trained by nomadsenshi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_jesc_japanese_en2_en_5.5.0_3.0_1725863378869.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_jesc_japanese_en2_en_5.5.0_3.0_1725863378869.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
 +{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \ + .setInputCols(["document"]) \ + .setOutputCol("sentence") + +marian = MarianTransformer.pretrained("opus_jesc_japanese_en2","en") \ + .setInputCols(["sentence"]) \ + .setOutputCol("translation") + +pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") + .setInputCols(Array("document")) + .setOutputCol("sentence") + +val marian = MarianTransformer.pretrained("opus_jesc_japanese_en2","en") + .setInputCols(Array("sentence")) + .setOutputCol("translation") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_jesc_japanese_en2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|515.6 MB| + +## References + +https://huggingface.co/nomadsenshi/opus-jesc-ja-en2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs_en.md new file mode 100644 index 00000000000000..19d1dc60f45dfc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs MarianTransformer from UnassumingOwl +author: John Snow Labs +name: opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs` is a English model originally trained by UnassumingOwl. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs_en_5.5.0_3.0_1725863578169.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs_en_5.5.0_3.0_1725863578169.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
 +{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \ + .setInputCols(["document"]) \ + .setOutputCol("sentence") + +marian = MarianTransformer.pretrained("opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs","en") \ + .setInputCols(["sentence"]) \ + .setOutputCol("translation") + +pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") + .setInputCols(Array("document")) + .setOutputCol("sentence") + +val marian = MarianTransformer.pretrained("opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs","en") + .setInputCols(Array("sentence")) + .setOutputCol("translation") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|301.2 MB| + +## References + +https://huggingface.co/UnassumingOwl/opus-mt-az-en-finetuned-npomo-en-5-epochs \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs_pipeline_en.md new file mode 100644 index 00000000000000..701198b1319962 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs_pipeline pipeline MarianTransformer from UnassumingOwl +author: John Snow Labs +name: opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs_pipeline` is a English model originally trained by UnassumingOwl. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs_pipeline_en_5.5.0_3.0_1725863593091.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs_pipeline_en_5.5.0_3.0_1725863593091.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_azerbaijani_english_finetuned_npomo_english_5_epochs_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|301.8 MB| + +## References + +https://huggingface.co/UnassumingOwl/opus-mt-az-en-finetuned-npomo-en-5-epochs + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_chinese_english_finetuned_english_tonga_tonga_islands_romanian_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_chinese_english_finetuned_english_tonga_tonga_islands_romanian_pipeline_en.md new file mode 100644 index 00000000000000..f223e00b931348 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_chinese_english_finetuned_english_tonga_tonga_islands_romanian_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_chinese_english_finetuned_english_tonga_tonga_islands_romanian_pipeline pipeline MarianTransformer from ruandd +author: John Snow Labs +name: opus_maltese_chinese_english_finetuned_english_tonga_tonga_islands_romanian_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_chinese_english_finetuned_english_tonga_tonga_islands_romanian_pipeline` is a English model originally trained by ruandd. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_chinese_english_finetuned_english_tonga_tonga_islands_romanian_pipeline_en_5.5.0_3.0_1725865999335.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_chinese_english_finetuned_english_tonga_tonga_islands_romanian_pipeline_en_5.5.0_3.0_1725865999335.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_maltese_chinese_english_finetuned_english_tonga_tonga_islands_romanian_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_maltese_chinese_english_finetuned_english_tonga_tonga_islands_romanian_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_chinese_english_finetuned_english_tonga_tonga_islands_romanian_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|540.4 MB| + +## References + +https://huggingface.co/ruandd/opus-mt-zh-en-finetuned-en-to-ro + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_bkm_10e10encdec_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_bkm_10e10encdec_pipeline_en.md new file mode 100644 index 00000000000000..7b3361c84f14e9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_bkm_10e10encdec_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_english_bkm_10e10encdec_pipeline pipeline MarianTransformer from kalese +author: John Snow Labs +name: opus_maltese_english_bkm_10e10encdec_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_bkm_10e10encdec_pipeline` is a English model originally trained by kalese. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_bkm_10e10encdec_pipeline_en_5.5.0_3.0_1725890954129.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_bkm_10e10encdec_pipeline_en_5.5.0_3.0_1725890954129.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_maltese_english_bkm_10e10encdec_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_maltese_english_bkm_10e10encdec_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_bkm_10e10encdec_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|508.7 MB| + +## References + +https://huggingface.co/kalese/opus-mt-en-bkm-10e10encdec + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_chinese_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_chinese_pipeline_en.md new file mode 100644 index 00000000000000..43ab0f88b0905f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_chinese_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_english_chinese_pipeline pipeline MarianTransformer from SutouOAO +author: John Snow Labs +name: opus_maltese_english_chinese_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_chinese_pipeline` is a English model originally trained by SutouOAO. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_chinese_pipeline_en_5.5.0_3.0_1725890570199.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_chinese_pipeline_en_5.5.0_3.0_1725890570199.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_maltese_english_chinese_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_maltese_english_chinese_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_chinese_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|541.2 MB| + +## References + +https://huggingface.co/SutouOAO/opus-mt-en-zh + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_indonesian_ccmatrix_lr_3_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_indonesian_ccmatrix_lr_3_en.md new file mode 100644 index 00000000000000..4a8745b837ba79 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_indonesian_ccmatrix_lr_3_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_english_indonesian_ccmatrix_lr_3 MarianTransformer from yonathanstwn +author: John Snow Labs +name: opus_maltese_english_indonesian_ccmatrix_lr_3 +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_indonesian_ccmatrix_lr_3` is a English model originally trained by yonathanstwn. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_indonesian_ccmatrix_lr_3_en_5.5.0_3.0_1725913054976.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_indonesian_ccmatrix_lr_3_en_5.5.0_3.0_1725913054976.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("opus_maltese_english_indonesian_ccmatrix_lr_3","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("opus_maltese_english_indonesian_ccmatrix_lr_3","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
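Once the fitted pipeline has produced `pipelineDF`, the translated strings sit inside the `translation` annotation column set above. A minimal sketch for pulling them out, assuming the column names used in the example:

```python
from pyspark.sql.functions import explode

# Each document yields an array of sentence-level annotations; explode to one row per translated sentence
pipelineDF.select(explode("translation.result").alias("translated_text")).show(truncate=False)
```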
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_indonesian_ccmatrix_lr_3| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|483.0 MB| + +## References + +https://huggingface.co/yonathanstwn/opus-mt-en-id-ccmatrix-lr-3 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_indonesian_ccmatrix_lr_3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_indonesian_ccmatrix_lr_3_pipeline_en.md new file mode 100644 index 00000000000000..3994dfb5a66ac4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_indonesian_ccmatrix_lr_3_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_english_indonesian_ccmatrix_lr_3_pipeline pipeline MarianTransformer from yonathanstwn +author: John Snow Labs +name: opus_maltese_english_indonesian_ccmatrix_lr_3_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_indonesian_ccmatrix_lr_3_pipeline` is a English model originally trained by yonathanstwn. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_indonesian_ccmatrix_lr_3_pipeline_en_5.5.0_3.0_1725913078975.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_indonesian_ccmatrix_lr_3_pipeline_en_5.5.0_3.0_1725913078975.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

pipeline = PretrainedPipeline("opus_maltese_english_indonesian_ccmatrix_lr_3_pipeline", lang = "en")
df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
annotations = pipeline.transform(df)

```
```scala

val pipeline = new PretrainedPipeline("opus_maltese_english_indonesian_ccmatrix_lr_3_pipeline", lang = "en")
val df = Seq("I love spark-nlp").toDF("text")
val annotations = pipeline.transform(df)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_indonesian_ccmatrix_lr_3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|483.6 MB| + +## References + +https://huggingface.co/yonathanstwn/opus-mt-en-id-ccmatrix-lr-3 + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_andreypurwanto_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_andreypurwanto_en.md new file mode 100644 index 00000000000000..4d5e29e7263bc9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_andreypurwanto_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_andreypurwanto MarianTransformer from andreypurwanto +author: John Snow Labs +name: opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_andreypurwanto +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_andreypurwanto` is a English model originally trained by andreypurwanto. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_andreypurwanto_en_5.5.0_3.0_1725913057721.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_andreypurwanto_en_5.5.0_3.0_1725913057721.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_andreypurwanto","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_andreypurwanto","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_andreypurwanto| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.6 MB| + +## References + +https://huggingface.co/andreypurwanto/opus-mt-en-ro-finetuned-en-to-ro \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_dlyfar_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_dlyfar_en.md new file mode 100644 index 00000000000000..e79a3694fb0cbf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_dlyfar_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_dlyfar MarianTransformer from dlyfar +author: John Snow Labs +name: opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_dlyfar +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_dlyfar` is a English model originally trained by dlyfar. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_dlyfar_en_5.5.0_3.0_1725891336962.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_dlyfar_en_5.5.0_3.0_1725891336962.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_dlyfar","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_dlyfar","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_dlyfar| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.5 MB| + +## References + +https://huggingface.co/dlyfar/opus-mt-en-ro-finetuned-en-to-ro \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_guhuawuli_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_guhuawuli_en.md new file mode 100644 index 00000000000000..77f8f8f1ae8a87 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_guhuawuli_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_guhuawuli MarianTransformer from guhuawuli +author: John Snow Labs +name: opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_guhuawuli +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_guhuawuli` is a English model originally trained by guhuawuli. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_guhuawuli_en_5.5.0_3.0_1725913812442.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_guhuawuli_en_5.5.0_3.0_1725913812442.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_guhuawuli","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_guhuawuli","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_guhuawuli| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.6 MB| + +## References + +https://huggingface.co/guhuawuli/opus-mt-en-ro-finetuned-en-to-ro \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_hfdsajkfd_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_hfdsajkfd_en.md new file mode 100644 index 00000000000000..9fdd24d4a27557 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_hfdsajkfd_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_hfdsajkfd MarianTransformer from hfdsajkfd +author: John Snow Labs +name: opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_hfdsajkfd +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_hfdsajkfd` is a English model originally trained by hfdsajkfd. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_hfdsajkfd_en_5.5.0_3.0_1725891095931.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_hfdsajkfd_en_5.5.0_3.0_1725891095931.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_hfdsajkfd","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_hfdsajkfd","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_romanian_finetuned_english_tonga_tonga_islands_romanian_hfdsajkfd| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.6 MB| + +## References + +https://huggingface.co/hfdsajkfd/opus-mt-en-ro-finetuned-en-to-ro \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_hch_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_hch_en.md new file mode 100644 index 00000000000000..e94901f4fb2717 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_hch_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_hch MarianTransformer from mekjr1 +author: John Snow Labs +name: opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_hch +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_hch` is a English model originally trained by mekjr1. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_hch_en_5.5.0_3.0_1725863537800.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_hch_en_5.5.0_3.0_1725863537800.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_hch","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_hch","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
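Translation throughput and truncation behaviour are largely governed by the MarianTransformer batch and length parameters. A hedged sketch of tuning them on the annotator from the example above; the values shown are illustrative assumptions, not recommendations:

```python
# Larger batches trade memory for throughput; longer limits reduce truncation of long sentences
marian = MarianTransformer.pretrained("opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_hch", "en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation") \
    .setBatchSize(8) \
    .setMaxInputLength(512) \
    .setMaxOutputLength(512)
```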
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_hch| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|539.8 MB| + +## References + +https://huggingface.co/mekjr1/opus-mt-en-es-finetuned-es-to-hch \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog_en.md new file mode 100644 index 00000000000000..d1f321d606b59f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog MarianTransformer from mekjr1 +author: John Snow Labs +name: opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog` is a English model originally trained by mekjr1. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog_en_5.5.0_3.0_1725913299871.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog_en_5.5.0_3.0_1725913299871.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|539.9 MB| + +## References + +https://huggingface.co/mekjr1/opus-mt-en-es-finetuned-es-to-kog \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog_pipeline_en.md new file mode 100644 index 00000000000000..26b5439abd3506 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog_pipeline pipeline MarianTransformer from mekjr1 +author: John Snow Labs +name: opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog_pipeline` is a English model originally trained by mekjr1. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog_pipeline_en_5.5.0_3.0_1725913328742.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog_pipeline_en_5.5.0_3.0_1725913328742.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

pipeline = PretrainedPipeline("opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog_pipeline", lang = "en")
df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
annotations = pipeline.transform(df)

```
```scala

val pipeline = new PretrainedPipeline("opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog_pipeline", lang = "en")
val df = Seq("I love spark-nlp").toDF("text")
val annotations = pipeline.transform(df)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_spanish_finetuned_spanish_tonga_tonga_islands_kog_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|540.4 MB| + +## References + +https://huggingface.co/mekjr1/opus-mt-en-es-finetuned-es-to-kog + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_vietnamese_finetuned_eng_tonga_tonga_islands_vietnamese_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_vietnamese_finetuned_eng_tonga_tonga_islands_vietnamese_pipeline_en.md new file mode 100644 index 00000000000000..7643acaf9cf9d4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_vietnamese_finetuned_eng_tonga_tonga_islands_vietnamese_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_english_vietnamese_finetuned_eng_tonga_tonga_islands_vietnamese_pipeline pipeline MarianTransformer from callmeJ +author: John Snow Labs +name: opus_maltese_english_vietnamese_finetuned_eng_tonga_tonga_islands_vietnamese_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_vietnamese_finetuned_eng_tonga_tonga_islands_vietnamese_pipeline` is a English model originally trained by callmeJ. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_vietnamese_finetuned_eng_tonga_tonga_islands_vietnamese_pipeline_en_5.5.0_3.0_1725863089676.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_vietnamese_finetuned_eng_tonga_tonga_islands_vietnamese_pipeline_en_5.5.0_3.0_1725863089676.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

pipeline = PretrainedPipeline("opus_maltese_english_vietnamese_finetuned_eng_tonga_tonga_islands_vietnamese_pipeline", lang = "en")
df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
annotations = pipeline.transform(df)

```
```scala

val pipeline = new PretrainedPipeline("opus_maltese_english_vietnamese_finetuned_eng_tonga_tonga_islands_vietnamese_pipeline", lang = "en")
val df = Seq("I love spark-nlp").toDF("text")
val annotations = pipeline.transform(df)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_vietnamese_finetuned_eng_tonga_tonga_islands_vietnamese_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|475.1 MB| + +## References + +https://huggingface.co/callmeJ/opus-mt-en-vi-finetuned-eng-to-vie + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_vietnamese_full_finetuned_english_tonga_tonga_islands_vietnamese_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_vietnamese_full_finetuned_english_tonga_tonga_islands_vietnamese_en.md new file mode 100644 index 00000000000000..323524137ae99b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_english_vietnamese_full_finetuned_english_tonga_tonga_islands_vietnamese_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_english_vietnamese_full_finetuned_english_tonga_tonga_islands_vietnamese MarianTransformer from ncduy +author: John Snow Labs +name: opus_maltese_english_vietnamese_full_finetuned_english_tonga_tonga_islands_vietnamese +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_english_vietnamese_full_finetuned_english_tonga_tonga_islands_vietnamese` is a English model originally trained by ncduy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_english_vietnamese_full_finetuned_english_tonga_tonga_islands_vietnamese_en_5.5.0_3.0_1725863609663.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_english_vietnamese_full_finetuned_english_tonga_tonga_islands_vietnamese_en_5.5.0_3.0_1725863609663.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("opus_maltese_english_vietnamese_full_finetuned_english_tonga_tonga_islands_vietnamese","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("opus_maltese_english_vietnamese_full_finetuned_english_tonga_tonga_islands_vietnamese","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
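For low-latency, single-document use, the fitted `pipelineModel` can also be wrapped in Spark NLP's `LightPipeline`, which annotates plain strings without a DataFrame. A small sketch, assuming the column names used in the example above:

```python
from sparknlp.base import LightPipeline

# Wrap the fitted PipelineModel for in-memory annotation of plain strings
light = LightPipeline(pipelineModel)
result = light.annotate("I love spark-nlp")
print(result["translation"])
```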
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_english_vietnamese_full_finetuned_english_tonga_tonga_islands_vietnamese| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|476.7 MB| + +## References + +https://huggingface.co/ncduy/opus-mt-en-vi-full-finetuned-en-to-vi \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_italian_english_finetuned_20000_italian_tonga_tonga_islands_english_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_italian_english_finetuned_20000_italian_tonga_tonga_islands_english_pipeline_en.md new file mode 100644 index 00000000000000..d4e343b3d8ef0f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_italian_english_finetuned_20000_italian_tonga_tonga_islands_english_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_italian_english_finetuned_20000_italian_tonga_tonga_islands_english_pipeline pipeline MarianTransformer from VFiona +author: John Snow Labs +name: opus_maltese_italian_english_finetuned_20000_italian_tonga_tonga_islands_english_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_italian_english_finetuned_20000_italian_tonga_tonga_islands_english_pipeline` is a English model originally trained by VFiona. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_italian_english_finetuned_20000_italian_tonga_tonga_islands_english_pipeline_en_5.5.0_3.0_1725913874508.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_italian_english_finetuned_20000_italian_tonga_tonga_islands_english_pipeline_en_5.5.0_3.0_1725913874508.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

pipeline = PretrainedPipeline("opus_maltese_italian_english_finetuned_20000_italian_tonga_tonga_islands_english_pipeline", lang = "en")
df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
annotations = pipeline.transform(df)

```
```scala

val pipeline = new PretrainedPipeline("opus_maltese_italian_english_finetuned_20000_italian_tonga_tonga_islands_english_pipeline", lang = "en")
val df = Seq("I love spark-nlp").toDF("text")
val annotations = pipeline.transform(df)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_italian_english_finetuned_20000_italian_tonga_tonga_islands_english_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|625.6 MB| + +## References + +https://huggingface.co/VFiona/opus-mt-it-en-finetuned_20000-it-to-en + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa_en.md new file mode 100644 index 00000000000000..f45ca2148fbb54 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa MarianTransformer from october-sd +author: John Snow Labs +name: opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa` is a English model originally trained by october-sd. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa_en_5.5.0_3.0_1725914369189.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa_en_5.5.0_3.0_1725914369189.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|519.1 MB| + +## References + +https://huggingface.co/october-sd/opus-mt-mr-en_mr_en_new \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa_pipeline_en.md new file mode 100644 index 00000000000000..65b2870bb5be3e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa_pipeline pipeline MarianTransformer from october-sd +author: John Snow Labs +name: opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa_pipeline` is a English model originally trained by october-sd. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa_pipeline_en_5.5.0_3.0_1725914396481.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa_pipeline_en_5.5.0_3.0_1725914396481.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

pipeline = PretrainedPipeline("opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa_pipeline", lang = "en")
df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
annotations = pipeline.transform(df)

```
```scala

val pipeline = new PretrainedPipeline("opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa_pipeline", lang = "en")
val df = Seq("I love spark-nlp").toDF("text")
val annotations = pipeline.transform(df)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_marathi_marh_english_marathi_marh_english_nepal_bhasa_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|519.6 MB| + +## References + +https://huggingface.co/october-sd/opus-mt-mr-en_mr_en_new + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_russian_english_finetuned_russian_tonga_tonga_islands_english_dentikka_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_russian_english_finetuned_russian_tonga_tonga_islands_english_dentikka_pipeline_en.md new file mode 100644 index 00000000000000..3c83c6b3ed502c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_russian_english_finetuned_russian_tonga_tonga_islands_english_dentikka_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_russian_english_finetuned_russian_tonga_tonga_islands_english_dentikka_pipeline pipeline MarianTransformer from Dentikka +author: John Snow Labs +name: opus_maltese_russian_english_finetuned_russian_tonga_tonga_islands_english_dentikka_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_russian_english_finetuned_russian_tonga_tonga_islands_english_dentikka_pipeline` is a English model originally trained by Dentikka. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_russian_english_finetuned_russian_tonga_tonga_islands_english_dentikka_pipeline_en_5.5.0_3.0_1725840270574.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_russian_english_finetuned_russian_tonga_tonga_islands_english_dentikka_pipeline_en_5.5.0_3.0_1725840270574.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

pipeline = PretrainedPipeline("opus_maltese_russian_english_finetuned_russian_tonga_tonga_islands_english_dentikka_pipeline", lang = "en")
df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
annotations = pipeline.transform(df)

```
```scala

val pipeline = new PretrainedPipeline("opus_maltese_russian_english_finetuned_russian_tonga_tonga_islands_english_dentikka_pipeline", lang = "en")
val df = Seq("I love spark-nlp").toDF("text")
val annotations = pipeline.transform(df)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_russian_english_finetuned_russian_tonga_tonga_islands_english_dentikka_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|526.9 MB| + +## References + +https://huggingface.co/Dentikka/opus-mt-ru-en-finetuned-ru-to-en + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_walloon_english_finetuned_npomo_english_5_epochs_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_walloon_english_finetuned_npomo_english_5_epochs_en.md new file mode 100644 index 00000000000000..b6df0d85a58f20 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_walloon_english_finetuned_npomo_english_5_epochs_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_maltese_walloon_english_finetuned_npomo_english_5_epochs MarianTransformer from UnassumingOwl +author: John Snow Labs +name: opus_maltese_walloon_english_finetuned_npomo_english_5_epochs +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_walloon_english_finetuned_npomo_english_5_epochs` is a English model originally trained by UnassumingOwl. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_walloon_english_finetuned_npomo_english_5_epochs_en_5.5.0_3.0_1725913584266.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_walloon_english_finetuned_npomo_english_5_epochs_en_5.5.0_3.0_1725913584266.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("opus_maltese_walloon_english_finetuned_npomo_english_5_epochs","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("opus_maltese_walloon_english_finetuned_npomo_english_5_epochs","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_walloon_english_finetuned_npomo_english_5_epochs| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|506.3 MB| + +## References + +https://huggingface.co/UnassumingOwl/opus-mt-wa-en-finetuned-npomo-en-5-epochs \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_walloon_english_finetuned_npomo_english_5_epochs_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_walloon_english_finetuned_npomo_english_5_epochs_pipeline_en.md new file mode 100644 index 00000000000000..8f2bb45e7020d8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_maltese_walloon_english_finetuned_npomo_english_5_epochs_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_maltese_walloon_english_finetuned_npomo_english_5_epochs_pipeline pipeline MarianTransformer from UnassumingOwl +author: John Snow Labs +name: opus_maltese_walloon_english_finetuned_npomo_english_5_epochs_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_maltese_walloon_english_finetuned_npomo_english_5_epochs_pipeline` is a English model originally trained by UnassumingOwl. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_maltese_walloon_english_finetuned_npomo_english_5_epochs_pipeline_en_5.5.0_3.0_1725913612453.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_maltese_walloon_english_finetuned_npomo_english_5_epochs_pipeline_en_5.5.0_3.0_1725913612453.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

pipeline = PretrainedPipeline("opus_maltese_walloon_english_finetuned_npomo_english_5_epochs_pipeline", lang = "en")
df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
annotations = pipeline.transform(df)

```
```scala

val pipeline = new PretrainedPipeline("opus_maltese_walloon_english_finetuned_npomo_english_5_epochs_pipeline", lang = "en")
val df = Seq("I love spark-nlp").toDF("text")
val annotations = pipeline.transform(df)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_maltese_walloon_english_finetuned_npomo_english_5_epochs_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|506.9 MB| + +## References + +https://huggingface.co/UnassumingOwl/opus-mt-wa-en-finetuned-npomo-en-5-epochs + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_wmt_finetuned_enfr_hpc_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_wmt_finetuned_enfr_hpc_en.md new file mode 100644 index 00000000000000..e7516556dbbb83 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_wmt_finetuned_enfr_hpc_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English opus_wmt_finetuned_enfr_hpc MarianTransformer from ethansimrm +author: John Snow Labs +name: opus_wmt_finetuned_enfr_hpc +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_wmt_finetuned_enfr_hpc` is a English model originally trained by ethansimrm. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_wmt_finetuned_enfr_hpc_en_5.5.0_3.0_1725913583311.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_wmt_finetuned_enfr_hpc_en_5.5.0_3.0_1725913583311.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("opus_wmt_finetuned_enfr_hpc","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
    .setInputCols(Array("document"))
    .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("opus_wmt_finetuned_enfr_hpc","en")
    .setInputCols(Array("sentence"))
    .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
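Since the example fits a standard Spark ML pipeline, the fitted model can be persisted and reloaded with the usual Spark ML mechanics. A minimal sketch; the path is a placeholder, not a location used by this model:

```python
from pyspark.ml import PipelineModel

# Persist the fitted pipeline once, then reload it in later jobs without refitting
pipelineModel.write().overwrite().save("/tmp/opus_wmt_finetuned_enfr_hpc_model")
reloaded = PipelineModel.load("/tmp/opus_wmt_finetuned_enfr_hpc_model")
reloaded.transform(data).select("translation.result").show(truncate=False)
```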
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_wmt_finetuned_enfr_hpc| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.4 MB| + +## References + +https://huggingface.co/ethansimrm/opus_wmt_finetuned_enfr_hpc \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-opus_wmt_finetuned_enfr_hpc_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-opus_wmt_finetuned_enfr_hpc_pipeline_en.md new file mode 100644 index 00000000000000..d0ab6f688bb4bf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-opus_wmt_finetuned_enfr_hpc_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English opus_wmt_finetuned_enfr_hpc_pipeline pipeline MarianTransformer from ethansimrm +author: John Snow Labs +name: opus_wmt_finetuned_enfr_hpc_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`opus_wmt_finetuned_enfr_hpc_pipeline` is a English model originally trained by ethansimrm. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/opus_wmt_finetuned_enfr_hpc_pipeline_en_5.5.0_3.0_1725913610282.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/opus_wmt_finetuned_enfr_hpc_pipeline_en_5.5.0_3.0_1725913610282.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("opus_wmt_finetuned_enfr_hpc_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("opus_wmt_finetuned_enfr_hpc_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|opus_wmt_finetuned_enfr_hpc_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|509.0 MB| + +## References + +https://huggingface.co/ethansimrm/opus_wmt_finetuned_enfr_hpc + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-originality_tagging8_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-originality_tagging8_pipeline_en.md new file mode 100644 index 00000000000000..f9b32f0e79141e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-originality_tagging8_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English originality_tagging8_pipeline pipeline RoBertaForSequenceClassification from EricCham8 +author: John Snow Labs +name: originality_tagging8_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`originality_tagging8_pipeline` is a English model originally trained by EricCham8. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/originality_tagging8_pipeline_en_5.5.0_3.0_1725903378052.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/originality_tagging8_pipeline_en_5.5.0_3.0_1725903378052.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("originality_tagging8_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("originality_tagging8_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|originality_tagging8_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|449.5 MB| + +## References + +https://huggingface.co/EricCham8/Originality_tagging8 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-othe_3_en.md b/docs/_posts/ahmedlone127/2024-09-09-othe_3_en.md new file mode 100644 index 00000000000000..2eca3073cf3b30 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-othe_3_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English othe_3 RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: othe_3 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`othe_3` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/othe_3_en_5.5.0_3.0_1725919889748.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/othe_3_en_5.5.0_3.0_1725919889748.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ +    .setInputCol('text') \ +    .setOutputCol('document') + +tokenizer = Tokenizer() \ +    .setInputCols(['document']) \ +    .setOutputCol('token') + +sequenceClassifier = RoBertaForSequenceClassification.pretrained("othe_3","en") \ +    .setInputCols(["document","token"]) \ +    .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() +    .setInputCol("text") +    .setOutputCol("document") + +val tokenizer = new Tokenizer() +    .setInputCols(Array("document")) +    .setOutputCol("token") + +val sequenceClassifier = RoBertaForSequenceClassification.pretrained("othe_3", "en") +    .setInputCols(Array("document","token")) +    .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
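+ +The predicted label for each row ends up in the `class` annotation column configured above; a small sketch, assuming the column names from that example, of how the prediction might be read back: + +```python +# Hedged sketch: reading the predicted class from the pipeline output above. +pipelineDF.select("text", "class.result").show(truncate=False) +``` +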
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|othe_3| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Othe_3 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-othe_3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-othe_3_pipeline_en.md new file mode 100644 index 00000000000000..5271142f94e17e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-othe_3_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English othe_3_pipeline pipeline RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: othe_3_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`othe_3_pipeline` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/othe_3_pipeline_en_5.5.0_3.0_1725919912567.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/othe_3_pipeline_en_5.5.0_3.0_1725919912567.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("othe_3_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("othe_3_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|othe_3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Othe_3 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-othe_4_en.md b/docs/_posts/ahmedlone127/2024-09-09-othe_4_en.md new file mode 100644 index 00000000000000..8a0a05e4eddb05 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-othe_4_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English othe_4 RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: othe_4 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`othe_4` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/othe_4_en_5.5.0_3.0_1725920756509.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/othe_4_en_5.5.0_3.0_1725920756509.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ +    .setInputCol('text') \ +    .setOutputCol('document') + +tokenizer = Tokenizer() \ +    .setInputCols(['document']) \ +    .setOutputCol('token') + +sequenceClassifier = RoBertaForSequenceClassification.pretrained("othe_4","en") \ +    .setInputCols(["document","token"]) \ +    .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() +    .setInputCol("text") +    .setOutputCol("document") + +val tokenizer = new Tokenizer() +    .setInputCols(Array("document")) +    .setOutputCol("token") + +val sequenceClassifier = RoBertaForSequenceClassification.pretrained("othe_4", "en") +    .setInputCols(Array("document","token")) +    .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|othe_4| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Othe_4 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-otis_official_spam_model_en.md b/docs/_posts/ahmedlone127/2024-09-09-otis_official_spam_model_en.md new file mode 100644 index 00000000000000..46bac4925d61fb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-otis_official_spam_model_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English otis_official_spam_model BertForSequenceClassification from Titeiiko +author: John Snow Labs +name: otis_official_spam_model +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`otis_official_spam_model` is a English model originally trained by Titeiiko. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/otis_official_spam_model_en_5.5.0_3.0_1725900731070.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/otis_official_spam_model_en_5.5.0_3.0_1725900731070.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ +    .setInputCol('text') \ +    .setOutputCol('document') + +tokenizer = Tokenizer() \ +    .setInputCols(['document']) \ +    .setOutputCol('token') + +sequenceClassifier = BertForSequenceClassification.pretrained("otis_official_spam_model","en") \ +    .setInputCols(["document","token"]) \ +    .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() +    .setInputCol("text") +    .setOutputCol("document") + +val tokenizer = new Tokenizer() +    .setInputCols(Array("document")) +    .setOutputCol("token") + +val sequenceClassifier = BertForSequenceClassification.pretrained("otis_official_spam_model", "en") +    .setInputCols(Array("document","token")) +    .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
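+ +For quick, single-document scoring outside of a DataFrame, Spark NLP's `LightPipeline` can wrap the fitted model; a sketch under the assumption that the `pipelineModel` fitted in the example above is in scope and that the input string is purely illustrative: + +```python +# Hedged sketch: scoring a single string with LightPipeline. +from sparknlp.base import LightPipeline + +light = LightPipeline(pipelineModel) +result = light.annotate("Congratulations, you won a free prize! Click here.") +print(result["class"])  # predicted label(s) for the example text +``` +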
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|otis_official_spam_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|16.7 MB| + +## References + +https://huggingface.co/Titeiiko/OTIS-Official-Spam-Model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-paraphrase_mpnet_base_v2_sst2_4samps_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-paraphrase_mpnet_base_v2_sst2_4samps_pipeline_en.md new file mode 100644 index 00000000000000..c4f2e05579a274 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-paraphrase_mpnet_base_v2_sst2_4samps_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English paraphrase_mpnet_base_v2_sst2_4samps_pipeline pipeline MPNetEmbeddings from orenpereg +author: John Snow Labs +name: paraphrase_mpnet_base_v2_sst2_4samps_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`paraphrase_mpnet_base_v2_sst2_4samps_pipeline` is a English model originally trained by orenpereg. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/paraphrase_mpnet_base_v2_sst2_4samps_pipeline_en_5.5.0_3.0_1725896695345.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/paraphrase_mpnet_base_v2_sst2_4samps_pipeline_en_5.5.0_3.0_1725896695345.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("paraphrase_mpnet_base_v2_sst2_4samps_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("paraphrase_mpnet_base_v2_sst2_4samps_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|paraphrase_mpnet_base_v2_sst2_4samps_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/orenpereg/paraphrase-mpnet-base-v2_sst2_4samps + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-passage_ranker_chocolate_en.md b/docs/_posts/ahmedlone127/2024-09-09-passage_ranker_chocolate_en.md new file mode 100644 index 00000000000000..9e7d2c8f6de4f5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-passage_ranker_chocolate_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English passage_ranker_chocolate BertForSequenceClassification from sinequa +author: John Snow Labs +name: passage_ranker_chocolate +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`passage_ranker_chocolate` is a English model originally trained by sinequa. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/passage_ranker_chocolate_en_5.5.0_3.0_1725899970448.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/passage_ranker_chocolate_en_5.5.0_3.0_1725899970448.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ +    .setInputCol('text') \ +    .setOutputCol('document') + +tokenizer = Tokenizer() \ +    .setInputCols(['document']) \ +    .setOutputCol('token') + +sequenceClassifier = BertForSequenceClassification.pretrained("passage_ranker_chocolate","en") \ +    .setInputCols(["document","token"]) \ +    .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() +    .setInputCol("text") +    .setOutputCol("document") + +val tokenizer = new Tokenizer() +    .setInputCols(Array("document")) +    .setOutputCol("token") + +val sequenceClassifier = BertForSequenceClassification.pretrained("passage_ranker_chocolate", "en") +    .setInputCols(Array("document","token")) +    .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|passage_ranker_chocolate| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|84.7 MB| + +## References + +https://huggingface.co/sinequa/passage-ranker.chocolate \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-patentbert_en.md b/docs/_posts/ahmedlone127/2024-09-09-patentbert_en.md new file mode 100644 index 00000000000000..8e3f5e86e30095 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-patentbert_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English patentbert BertEmbeddings from dheerajpai +author: John Snow Labs +name: patentbert +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, bert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`patentbert` is a English model originally trained by dheerajpai. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/patentbert_en_5.5.0_3.0_1725881672567.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/patentbert_en_5.5.0_3.0_1725881672567.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = BertEmbeddings.pretrained("patentbert","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = BertEmbeddings.pretrained("patentbert","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
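+ +Each token receives a dense vector in the `embeddings` column configured above; a brief sketch, assuming the example's column names, of how the token vectors might be inspected: + +```python +# Hedged sketch: inspecting token embeddings produced by the pipeline above. +from pyspark.sql.functions import explode, col + +pipelineDF.select(explode("embeddings").alias("emb")).select(col("emb.result").alias("token"), col("emb.embeddings").alias("vector")).show(truncate=80) +``` +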
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|patentbert| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[bert]| +|Language:|en| +|Size:|75.3 MB| + +## References + +https://huggingface.co/dheerajpai/patentbert \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-patentbert_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-patentbert_pipeline_en.md new file mode 100644 index 00000000000000..f20521394de378 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-patentbert_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English patentbert_pipeline pipeline BertEmbeddings from dheerajpai +author: John Snow Labs +name: patentbert_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`patentbert_pipeline` is a English model originally trained by dheerajpai. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/patentbert_pipeline_en_5.5.0_3.0_1725881676350.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/patentbert_pipeline_en_5.5.0_3.0_1725881676350.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("patentbert_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("patentbert_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|patentbert_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|75.4 MB| + +## References + +https://huggingface.co/dheerajpai/patentbert + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-petbert_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-petbert_pipeline_en.md new file mode 100644 index 00000000000000..4453b6451009c7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-petbert_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English petbert_pipeline pipeline BertEmbeddings from SAVSNET +author: John Snow Labs +name: petbert_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`petbert_pipeline` is a English model originally trained by SAVSNET. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/petbert_pipeline_en_5.5.0_3.0_1725881944598.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/petbert_pipeline_en_5.5.0_3.0_1725881944598.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("petbert_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("petbert_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|petbert_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|403.2 MB| + +## References + +https://huggingface.co/SAVSNET/PetBERT + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-platzi_distilroberta_base_mrpc_glue_jonathan_castillo_en.md b/docs/_posts/ahmedlone127/2024-09-09-platzi_distilroberta_base_mrpc_glue_jonathan_castillo_en.md new file mode 100644 index 00000000000000..9d50b2ac74e7f7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-platzi_distilroberta_base_mrpc_glue_jonathan_castillo_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English platzi_distilroberta_base_mrpc_glue_jonathan_castillo RoBertaForSequenceClassification from platzi +author: John Snow Labs +name: platzi_distilroberta_base_mrpc_glue_jonathan_castillo +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`platzi_distilroberta_base_mrpc_glue_jonathan_castillo` is a English model originally trained by platzi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/platzi_distilroberta_base_mrpc_glue_jonathan_castillo_en_5.5.0_3.0_1725903687240.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/platzi_distilroberta_base_mrpc_glue_jonathan_castillo_en_5.5.0_3.0_1725903687240.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ +    .setInputCol('text') \ +    .setOutputCol('document') + +tokenizer = Tokenizer() \ +    .setInputCols(['document']) \ +    .setOutputCol('token') + +sequenceClassifier = RoBertaForSequenceClassification.pretrained("platzi_distilroberta_base_mrpc_glue_jonathan_castillo","en") \ +    .setInputCols(["document","token"]) \ +    .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() +    .setInputCol("text") +    .setOutputCol("document") + +val tokenizer = new Tokenizer() +    .setInputCols(Array("document")) +    .setOutputCol("token") + +val sequenceClassifier = RoBertaForSequenceClassification.pretrained("platzi_distilroberta_base_mrpc_glue_jonathan_castillo", "en") +    .setInputCols(Array("document","token")) +    .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|platzi_distilroberta_base_mrpc_glue_jonathan_castillo| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|308.6 MB| + +## References + +https://huggingface.co/platzi/platzi-distilroberta-base-mrpc-glue-Jonathan-Castillo \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-portuguese_xlm_r_falsefalse_0_2_best_en.md b/docs/_posts/ahmedlone127/2024-09-09-portuguese_xlm_r_falsefalse_0_2_best_en.md new file mode 100644 index 00000000000000..f3de192cb8c306 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-portuguese_xlm_r_falsefalse_0_2_best_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English portuguese_xlm_r_falsefalse_0_2_best XlmRoBertaForSequenceClassification from harish +author: John Snow Labs +name: portuguese_xlm_r_falsefalse_0_2_best +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`portuguese_xlm_r_falsefalse_0_2_best` is a English model originally trained by harish. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/portuguese_xlm_r_falsefalse_0_2_best_en_5.5.0_3.0_1725906946069.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/portuguese_xlm_r_falsefalse_0_2_best_en_5.5.0_3.0_1725906946069.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ +    .setInputCol('text') \ +    .setOutputCol('document') + +tokenizer = Tokenizer() \ +    .setInputCols(['document']) \ +    .setOutputCol('token') + +sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("portuguese_xlm_r_falsefalse_0_2_best","en") \ +    .setInputCols(["document","token"]) \ +    .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() +    .setInputCol("text") +    .setOutputCol("document") + +val tokenizer = new Tokenizer() +    .setInputCols(Array("document")) +    .setOutputCol("token") + +val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("portuguese_xlm_r_falsefalse_0_2_best", "en") +    .setInputCols(Array("document","token")) +    .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|portuguese_xlm_r_falsefalse_0_2_best| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|780.6 MB| + +## References + +https://huggingface.co/harish/PT-XLM_R-FalseFalse-0_2_BEST \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-portuguese_xlm_r_falsefalse_0_2_best_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-portuguese_xlm_r_falsefalse_0_2_best_pipeline_en.md new file mode 100644 index 00000000000000..1a4482c2b4557f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-portuguese_xlm_r_falsefalse_0_2_best_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English portuguese_xlm_r_falsefalse_0_2_best_pipeline pipeline XlmRoBertaForSequenceClassification from harish +author: John Snow Labs +name: portuguese_xlm_r_falsefalse_0_2_best_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`portuguese_xlm_r_falsefalse_0_2_best_pipeline` is a English model originally trained by harish. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/portuguese_xlm_r_falsefalse_0_2_best_pipeline_en_5.5.0_3.0_1725907085220.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/portuguese_xlm_r_falsefalse_0_2_best_pipeline_en_5.5.0_3.0_1725907085220.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("portuguese_xlm_r_falsefalse_0_2_best_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("portuguese_xlm_r_falsefalse_0_2_best_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|portuguese_xlm_r_falsefalse_0_2_best_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|780.6 MB| + +## References + +https://huggingface.co/harish/PT-XLM_R-FalseFalse-0_2_BEST + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-prev_lab1_finetuning_en.md b/docs/_posts/ahmedlone127/2024-09-09-prev_lab1_finetuning_en.md new file mode 100644 index 00000000000000..377ce2f5821ba1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-prev_lab1_finetuning_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English prev_lab1_finetuning MarianTransformer from minhngca +author: John Snow Labs +name: prev_lab1_finetuning +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`prev_lab1_finetuning` is a English model originally trained by minhngca. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/prev_lab1_finetuning_en_5.5.0_3.0_1725891981878.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/prev_lab1_finetuning_en_5.5.0_3.0_1725891981878.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ +    .setInputCol("text") \ +    .setOutputCol("document") + +sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \ +    .setInputCols(["document"]) \ +    .setOutputCol("sentence") + +marian = MarianTransformer.pretrained("prev_lab1_finetuning","en") \ +    .setInputCols(["sentence"]) \ +    .setOutputCol("translation") + +pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() +    .setInputCol("text") +    .setOutputCol("document") + +val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") +    .setInputCols(Array("document")) +    .setOutputCol("sentence") + +val marian = MarianTransformer.pretrained("prev_lab1_finetuning","en") +    .setInputCols(Array("sentence")) +    .setOutputCol("translation") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|prev_lab1_finetuning| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|508.1 MB| + +## References + +https://huggingface.co/minhngca/prev_lab1_finetuning \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-q2d_origin_re_5_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-q2d_origin_re_5_pipeline_en.md new file mode 100644 index 00000000000000..c11be67e82a684 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-q2d_origin_re_5_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English q2d_origin_re_5_pipeline pipeline MPNetEmbeddings from ingeol +author: John Snow Labs +name: q2d_origin_re_5_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`q2d_origin_re_5_pipeline` is a English model originally trained by ingeol. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/q2d_origin_re_5_pipeline_en_5.5.0_3.0_1725897119691.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/q2d_origin_re_5_pipeline_en_5.5.0_3.0_1725897119691.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("q2d_origin_re_5_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("q2d_origin_re_5_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|q2d_origin_re_5_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/ingeol/q2d_origin_re_5 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-q2e_ep3_35_en.md b/docs/_posts/ahmedlone127/2024-09-09-q2e_ep3_35_en.md new file mode 100644 index 00000000000000..2861c140e4fe05 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-q2e_ep3_35_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English q2e_ep3_35 MPNetEmbeddings from ingeol +author: John Snow Labs +name: q2e_ep3_35 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`q2e_ep3_35` is a English model originally trained by ingeol. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/q2e_ep3_35_en_5.5.0_3.0_1725874597305.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/q2e_ep3_35_en_5.5.0_3.0_1725874597305.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("q2e_ep3_35","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("q2e_ep3_35","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
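+ +Because this annotator emits one sentence-level vector per document, the embeddings can be compared directly; a hedged sketch under the assumptions that the fitted `pipelineModel` from the example above is in scope, that NumPy is available, and that the two illustrative sentences are placeholders: + +```python +# Hedged sketch: comparing two sentences with the embeddings produced above. +import numpy as np + +pairs = spark.createDataFrame([["I love spark-nlp"], ["Spark NLP is great"]]).toDF("text") +vectors = [row[0][0] for row in pipelineModel.transform(pairs).select("embeddings.embeddings").collect()] +v1, v2 = np.array(vectors[0]), np.array(vectors[1]) +print(float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))))  # cosine similarity +``` +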
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|q2e_ep3_35| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/ingeol/q2e_ep3_35 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-qa_callback_en.md b/docs/_posts/ahmedlone127/2024-09-09-qa_callback_en.md new file mode 100644 index 00000000000000..8dbb15bb735348 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-qa_callback_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English qa_callback DistilBertForQuestionAnswering from RachelLe +author: John Snow Labs +name: qa_callback +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`qa_callback` is a English model originally trained by RachelLe. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/qa_callback_en_5.5.0_3.0_1725876952866.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/qa_callback_en_5.5.0_3.0_1725876952866.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = MultiDocumentAssembler() \ +    .setInputCols(["question", "context"]) \ +    .setOutputCols(["document_question", "document_context"]) + +spanClassifier = DistilBertForQuestionAnswering.pretrained("qa_callback","en") \ +    .setInputCols(["document_question","document_context"]) \ +    .setOutputCol("answer") + +pipeline = Pipeline().setStages([documentAssembler, spanClassifier]) +data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new MultiDocumentAssembler() +    .setInputCols(Array("question", "context")) +    .setOutputCols(Array("document_question", "document_context")) + +val spanClassifier = DistilBertForQuestionAnswering.pretrained("qa_callback", "en") +    .setInputCols(Array("document_question","document_context")) +    .setOutputCol("answer") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier)) +val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
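+ +The extracted answer span lands in the `answer` annotation column configured above; a small follow-up sketch, assuming the column names from that example: + +```python +# Hedged sketch: reading the predicted answer from the QA pipeline output above. +pipelineDF.select("question", "answer.result").show(truncate=False) +``` +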
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|qa_callback| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/RachelLe/qa_callback \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-question_answering_model_vishnun0027_en.md b/docs/_posts/ahmedlone127/2024-09-09-question_answering_model_vishnun0027_en.md new file mode 100644 index 00000000000000..3b484a32eb55a2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-question_answering_model_vishnun0027_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English question_answering_model_vishnun0027 DistilBertForQuestionAnswering from vishnun0027 +author: John Snow Labs +name: question_answering_model_vishnun0027 +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`question_answering_model_vishnun0027` is a English model originally trained by vishnun0027. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/question_answering_model_vishnun0027_en_5.5.0_3.0_1725877076427.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/question_answering_model_vishnun0027_en_5.5.0_3.0_1725877076427.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = MultiDocumentAssembler() \ +    .setInputCols(["question", "context"]) \ +    .setOutputCols(["document_question", "document_context"]) + +spanClassifier = DistilBertForQuestionAnswering.pretrained("question_answering_model_vishnun0027","en") \ +    .setInputCols(["document_question","document_context"]) \ +    .setOutputCol("answer") + +pipeline = Pipeline().setStages([documentAssembler, spanClassifier]) +data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new MultiDocumentAssembler() +    .setInputCols(Array("question", "context")) +    .setOutputCols(Array("document_question", "document_context")) + +val spanClassifier = DistilBertForQuestionAnswering.pretrained("question_answering_model_vishnun0027", "en") +    .setInputCols(Array("document_question","document_context")) +    .setOutputCol("answer") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier)) +val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|question_answering_model_vishnun0027| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/vishnun0027/Question_answering_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-question_anwsering_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-question_anwsering_pipeline_en.md new file mode 100644 index 00000000000000..f5671165d3d1ce --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-question_anwsering_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English question_anwsering_pipeline pipeline DistilBertForQuestionAnswering from Yeji-Seong +author: John Snow Labs +name: question_anwsering_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`question_anwsering_pipeline` is a English model originally trained by Yeji-Seong. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/question_anwsering_pipeline_en_5.5.0_3.0_1725868875569.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/question_anwsering_pipeline_en_5.5.0_3.0_1725868875569.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("question_anwsering_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("question_anwsering_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
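+ +This pipeline wraps a MultiDocumentAssembler, so the `df` passed to `transform` above is assumed to carry both a question column and a context column; the names below are assumptions about how the assembler was saved, so the output schema is printed to verify them before selecting results. + +```python +# Hedged sketch: a possible input DataFrame for the QA pipeline loaded above. +# The "question"/"context" column names are assumptions, not confirmed by the card. +df = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context") + +annotations = pipeline.transform(df) +annotations.printSchema()  # verify the question/context/answer columns produced by the pipeline +``` +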
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|question_anwsering_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Yeji-Seong/question-anwsering + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_en.md b/docs/_posts/ahmedlone127/2024-09-09-re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_en.md new file mode 100644 index 00000000000000..da249d22e46ff5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned XlmRoBertaForTokenClassification from ajtamayoh +author: John Snow Labs +name: re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned` is a English model originally trained by ajtamayoh. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_en_5.5.0_3.0_1725894618973.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_en_5.5.0_3.0_1725894618973.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ +    .setInputCol('text') \ +    .setOutputCol('document') + +tokenizer = Tokenizer() \ +    .setInputCols(['document']) \ +    .setOutputCol('token') + +tokenClassifier = XlmRoBertaForTokenClassification.pretrained("re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned","en") \ +    .setInputCols(["document","token"]) \ +    .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() +    .setInputCol("text") +    .setOutputCol("document") + +val tokenizer = new Tokenizer() +    .setInputCols("document") +    .setOutputCol("token") + +val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned", "en") +    .setInputCols(Array("document","token")) +    .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|847.7 MB| + +## References + +https://huggingface.co/ajtamayoh/RE_NegREF_NSD_Nubes_Training_Test_dataset_xlm_RoBERTa_base_fine_tuned \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_pipeline_en.md new file mode 100644 index 00000000000000..7e916a67bb971a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_pipeline pipeline XlmRoBertaForTokenClassification from ajtamayoh +author: John Snow Labs +name: re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_pipeline` is a English model originally trained by ajtamayoh. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_pipeline_en_5.5.0_3.0_1725894676172.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_pipeline_en_5.5.0_3.0_1725894676172.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
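+
+A `PretrainedPipeline` can also annotate plain strings in memory instead of a DataFrame. The sketch below is illustrative only: the import is the standard Spark NLP one, the Spanish sentence is an assumed example, and the keys of the returned dictionary depend on the stages bundled in this pipeline:
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+# Annotate a single string without building a DataFrame (illustrative sentence).
+pipeline = PretrainedPipeline("re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_pipeline", lang = "en")
+result = pipeline.annotate("La paciente no presenta fiebre.")
+print(result)
+```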
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|re_negref_nsd_nubes_training_test_dataset_xlm_roberta_base_fine_tuned_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|847.7 MB| + +## References + +https://huggingface.co/ajtamayoh/RE_NegREF_NSD_Nubes_Training_Test_dataset_xlm_RoBERTa_base_fine_tuned + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-regr_1_en.md b/docs/_posts/ahmedlone127/2024-09-09-regr_1_en.md new file mode 100644 index 00000000000000..5d8d3a1187c450 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-regr_1_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English regr_1 RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: regr_1 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`regr_1` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/regr_1_en_5.5.0_3.0_1725904195314.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/regr_1_en_5.5.0_3.0_1725904195314.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("regr_1","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("regr_1", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
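+
+To read the predicted label for each row, select the `class` column produced by the classifier. A minimal sketch, assuming the example pipeline above:
+
+```python
+# Each row shows the input text and the label(s) assigned by the classifier.
+pipelineDF.select("text", "class.result").show(truncate=False)
+```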
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|regr_1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Regr_1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-remote_sensing_distilbert_cased_en.md b/docs/_posts/ahmedlone127/2024-09-09-remote_sensing_distilbert_cased_en.md new file mode 100644 index 00000000000000..a92078c3bf2275 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-remote_sensing_distilbert_cased_en.md @@ -0,0 +1,92 @@ +--- +layout: model +title: English remote_sensing_distilbert_cased DistilBertEmbeddings from Chramer +author: John Snow Labs +name: remote_sensing_distilbert_cased +date: 2024-09-09 +tags: [distilbert, en, open_source, fill_mask, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`remote_sensing_distilbert_cased` is a English model originally trained by Chramer. + +## Predicted Entities + + + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/remote_sensing_distilbert_cased_en_5.5.0_3.0_1725909549810.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/remote_sensing_distilbert_cased_en_5.5.0_3.0_1725909549810.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+document_assembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+tokenizer = Tokenizer() \
+    .setInputCols(["document"]) \
+    .setOutputCol("token")
+
+embeddings = DistilBertEmbeddings.pretrained("remote_sensing_distilbert_cased","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("embeddings")
+
+pipeline = Pipeline().setStages([document_assembler, tokenizer, embeddings])
+
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+
+pipelineModel = pipeline.fit(data)
+
+pipelineDF = pipelineModel.transform(data)
+```
+```scala
+val document_assembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val embeddings = DistilBertEmbeddings
+    .pretrained("remote_sensing_distilbert_cased", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("embeddings")
+
+val pipeline = new Pipeline().setStages(Array(document_assembler, tokenizer, embeddings))
+
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+
+val pipelineModel = pipeline.fit(data)
+
+val pipelineDF = pipelineModel.transform(data)
+```
+</div>
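+
+The annotator returns one embedding vector per token. The sketch below, which assumes the example pipeline above, inspects the vector dimension (768 for a DistilBERT-base model):
+
+```python
+# Explode the per-token vectors and report their size.
+pipelineDF.selectExpr("explode(embeddings.embeddings) as vector") \
+    .selectExpr("size(vector) as dimensions") \
+    .show(5)
+```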
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|remote_sensing_distilbert_cased| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|243.7 MB| + +## References + +References + +https://huggingface.co/Chramer/remote-sensing-distilbert-cased \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-results_azyren_en.md b/docs/_posts/ahmedlone127/2024-09-09-results_azyren_en.md new file mode 100644 index 00000000000000..dd8bd2a89a14c1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-results_azyren_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English results_azyren DistilBertForQuestionAnswering from Azyren +author: John Snow Labs +name: results_azyren +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`results_azyren` is a English model originally trained by Azyren. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/results_azyren_en_5.5.0_3.0_1725877271919.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/results_azyren_en_5.5.0_3.0_1725877271919.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("results_azyren","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("results_azyren", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDS.toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
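+
+The extracted answer span lands in the `answer` column. A minimal sketch, assuming the example above:
+
+```python
+# Show the answer text predicted for the question/context pair.
+pipelineDF.select("answer.result").show(truncate=False)
+```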
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|results_azyren| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Azyren/results \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-rewardmodel_robertabase_rajueee_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-rewardmodel_robertabase_rajueee_pipeline_en.md new file mode 100644 index 00000000000000..ab58c566852ff1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-rewardmodel_robertabase_rajueee_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English rewardmodel_robertabase_rajueee_pipeline pipeline RoBertaForSequenceClassification from RajuEEE +author: John Snow Labs +name: rewardmodel_robertabase_rajueee_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`rewardmodel_robertabase_rajueee_pipeline` is a English model originally trained by RajuEEE. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/rewardmodel_robertabase_rajueee_pipeline_en_5.5.0_3.0_1725904438891.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/rewardmodel_robertabase_rajueee_pipeline_en_5.5.0_3.0_1725904438891.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("rewardmodel_robertabase_rajueee_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("rewardmodel_robertabase_rajueee_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|rewardmodel_robertabase_rajueee_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|433.2 MB| + +## References + +https://huggingface.co/RajuEEE/RewardModel_RobertaBase + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-rg_4k_fake_signatures_en.md b/docs/_posts/ahmedlone127/2024-09-09-rg_4k_fake_signatures_en.md new file mode 100644 index 00000000000000..e569b9a5f9d78a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-rg_4k_fake_signatures_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English rg_4k_fake_signatures DistilBertForTokenClassification from chilliadgl +author: John Snow Labs +name: rg_4k_fake_signatures +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, distilbert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`rg_4k_fake_signatures` is a English model originally trained by chilliadgl. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/rg_4k_fake_signatures_en_5.5.0_3.0_1725889894185.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/rg_4k_fake_signatures_en_5.5.0_3.0_1725889894185.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = DistilBertForTokenClassification.pretrained("rg_4k_fake_signatures","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = DistilBertForTokenClassification.pretrained("rg_4k_fake_signatures", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|rg_4k_fake_signatures| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/chilliadgl/RG_4k_fake_signatures \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-robbert_dutch_base_squad_dutch_en.md b/docs/_posts/ahmedlone127/2024-09-09-robbert_dutch_base_squad_dutch_en.md new file mode 100644 index 00000000000000..cd0c6cafbcc403 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-robbert_dutch_base_squad_dutch_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English robbert_dutch_base_squad_dutch RoBertaForQuestionAnswering from Nadav +author: John Snow Labs +name: robbert_dutch_base_squad_dutch +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`robbert_dutch_base_squad_dutch` is a English model originally trained by Nadav. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/robbert_dutch_base_squad_dutch_en_5.5.0_3.0_1725876233778.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/robbert_dutch_base_squad_dutch_en_5.5.0_3.0_1725876233778.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = RoBertaForQuestionAnswering.pretrained("robbert_dutch_base_squad_dutch","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = RoBertaForQuestionAnswering.pretrained("robbert_dutch_base_squad_dutch", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDS.toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|robbert_dutch_base_squad_dutch| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|435.7 MB| + +## References + +https://huggingface.co/Nadav/robbert-dutch-base-squad-nl \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-robbert_dutch_cola_hylkebr_nl.md b/docs/_posts/ahmedlone127/2024-09-09-robbert_dutch_cola_hylkebr_nl.md new file mode 100644 index 00000000000000..8e11d94e8fd147 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-robbert_dutch_cola_hylkebr_nl.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Dutch, Flemish robbert_dutch_cola_hylkebr RoBertaForSequenceClassification from HylkeBr +author: John Snow Labs +name: robbert_dutch_cola_hylkebr +date: 2024-09-09 +tags: [nl, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: nl +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`robbert_dutch_cola_hylkebr` is a Dutch, Flemish model originally trained by HylkeBr. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/robbert_dutch_cola_hylkebr_nl_5.5.0_3.0_1725911532935.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/robbert_dutch_cola_hylkebr_nl_5.5.0_3.0_1725911532935.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("robbert_dutch_cola_hylkebr","nl") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("robbert_dutch_cola_hylkebr", "nl")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|robbert_dutch_cola_hylkebr| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|nl| +|Size:|437.9 MB| + +## References + +https://huggingface.co/HylkeBr/robbert_dutch-cola \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-robbert_dutch_cola_hylkebr_pipeline_nl.md b/docs/_posts/ahmedlone127/2024-09-09-robbert_dutch_cola_hylkebr_pipeline_nl.md new file mode 100644 index 00000000000000..2386b2e12a3210 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-robbert_dutch_cola_hylkebr_pipeline_nl.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Dutch, Flemish robbert_dutch_cola_hylkebr_pipeline pipeline RoBertaForSequenceClassification from HylkeBr +author: John Snow Labs +name: robbert_dutch_cola_hylkebr_pipeline +date: 2024-09-09 +tags: [nl, open_source, pipeline, onnx] +task: Text Classification +language: nl +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`robbert_dutch_cola_hylkebr_pipeline` is a Dutch, Flemish model originally trained by HylkeBr. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/robbert_dutch_cola_hylkebr_pipeline_nl_5.5.0_3.0_1725911554756.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/robbert_dutch_cola_hylkebr_pipeline_nl_5.5.0_3.0_1725911554756.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("robbert_dutch_cola_hylkebr_pipeline", lang = "nl") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("robbert_dutch_cola_hylkebr_pipeline", lang = "nl") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|robbert_dutch_cola_hylkebr_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|nl| +|Size:|438.0 MB| + +## References + +https://huggingface.co/HylkeBr/robbert_dutch-cola + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_10_5eps_seed188_test_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_10_5eps_seed188_test_en.md new file mode 100644 index 00000000000000..762eb043a539d1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_10_5eps_seed188_test_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_10_5eps_seed188_test RoBertaForSequenceClassification from custeau +author: John Snow Labs +name: roberta_10_5eps_seed188_test +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_10_5eps_seed188_test` is a English model originally trained by custeau. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_10_5eps_seed188_test_en_5.5.0_3.0_1725903512922.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_10_5eps_seed188_test_en_5.5.0_3.0_1725903512922.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_10_5eps_seed188_test","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_10_5eps_seed188_test", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_10_5eps_seed188_test| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.3 MB| + +## References + +https://huggingface.co/custeau/roberta_10_5eps_seed188_test \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_1b_1_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_1b_1_en.md new file mode 100644 index 00000000000000..cb7041772371c6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_1b_1_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_base_1b_1 RoBertaEmbeddings from nyu-mll +author: John Snow Labs +name: roberta_base_1b_1 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_1b_1` is a English model originally trained by nyu-mll. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_1b_1_en_5.5.0_3.0_1725910259634.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_1b_1_en_5.5.0_3.0_1725910259634.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("roberta_base_1b_1","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("roberta_base_1b_1","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_1b_1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|295.8 MB| + +## References + +https://huggingface.co/nyu-mll/roberta-base-1B-1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_1b_1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_1b_1_pipeline_en.md new file mode 100644 index 00000000000000..72a7f1e8faa31f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_1b_1_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_base_1b_1_pipeline pipeline RoBertaEmbeddings from nyu-mll +author: John Snow Labs +name: roberta_base_1b_1_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_1b_1_pipeline` is a English model originally trained by nyu-mll. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_1b_1_pipeline_en_5.5.0_3.0_1725910343329.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_1b_1_pipeline_en_5.5.0_3.0_1725910343329.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_1b_1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_1b_1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_1b_1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|295.9 MB| + +## References + +https://huggingface.co/nyu-mll/roberta-base-1B-1 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_ag_news2_jupp2_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_ag_news2_jupp2_en.md new file mode 100644 index 00000000000000..ecdcc5e73742e7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_ag_news2_jupp2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_base_ag_news2_jupp2 RoBertaForSequenceClassification from Jupp2 +author: John Snow Labs +name: roberta_base_ag_news2_jupp2 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_ag_news2_jupp2` is a English model originally trained by Jupp2. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_ag_news2_jupp2_en_5.5.0_3.0_1725902804522.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_ag_news2_jupp2_en_5.5.0_3.0_1725902804522.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_ag_news2_jupp2","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_ag_news2_jupp2", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_ag_news2_jupp2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|445.7 MB| + +## References + +https://huggingface.co/Jupp2/roberta-base_ag_news2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_bne_autext_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_bne_autext_en.md new file mode 100644 index 00000000000000..054c0a02f74d48 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_bne_autext_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_base_bne_autext RoBertaForSequenceClassification from jorgefg03 +author: John Snow Labs +name: roberta_base_bne_autext +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_bne_autext` is a English model originally trained by jorgefg03. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_bne_autext_en_5.5.0_3.0_1725903598650.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_bne_autext_en_5.5.0_3.0_1725903598650.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_bne_autext","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_bne_autext", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_bne_autext| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|449.5 MB| + +## References + +https://huggingface.co/jorgefg03/roberta-base-bne-autext \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_bne_autext_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_bne_autext_pipeline_en.md new file mode 100644 index 00000000000000..cf25232292eb57 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_bne_autext_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_base_bne_autext_pipeline pipeline RoBertaForSequenceClassification from jorgefg03 +author: John Snow Labs +name: roberta_base_bne_autext_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_bne_autext_pipeline` is a English model originally trained by jorgefg03. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_bne_autext_pipeline_en_5.5.0_3.0_1725903627240.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_bne_autext_pipeline_en_5.5.0_3.0_1725903627240.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_bne_autext_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_bne_autext_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_bne_autext_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|449.6 MB| + +## References + +https://huggingface.co/jorgefg03/roberta-base-bne-autext + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_bne_finetuned_amazon_reviews_multi_golivaresm_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_bne_finetuned_amazon_reviews_multi_golivaresm_en.md new file mode 100644 index 00000000000000..3d40b0d215affd --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_bne_finetuned_amazon_reviews_multi_golivaresm_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_base_bne_finetuned_amazon_reviews_multi_golivaresm RoBertaForSequenceClassification from golivaresm +author: John Snow Labs +name: roberta_base_bne_finetuned_amazon_reviews_multi_golivaresm +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_bne_finetuned_amazon_reviews_multi_golivaresm` is a English model originally trained by golivaresm. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_bne_finetuned_amazon_reviews_multi_golivaresm_en_5.5.0_3.0_1725904098308.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_bne_finetuned_amazon_reviews_multi_golivaresm_en_5.5.0_3.0_1725904098308.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_bne_finetuned_amazon_reviews_multi_golivaresm","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_bne_finetuned_amazon_reviews_multi_golivaresm", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_bne_finetuned_amazon_reviews_multi_golivaresm| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|446.7 MB| + +## References + +https://huggingface.co/golivaresm/roberta-base-bne-finetuned-amazon_reviews_multi \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_bne_finetuned_amazon_reviews_multi_samael98_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_bne_finetuned_amazon_reviews_multi_samael98_en.md new file mode 100644 index 00000000000000..0f000fc471e67f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_bne_finetuned_amazon_reviews_multi_samael98_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_base_bne_finetuned_amazon_reviews_multi_samael98 RoBertaForSequenceClassification from Samael98 +author: John Snow Labs +name: roberta_base_bne_finetuned_amazon_reviews_multi_samael98 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_bne_finetuned_amazon_reviews_multi_samael98` is a English model originally trained by Samael98. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_bne_finetuned_amazon_reviews_multi_samael98_en_5.5.0_3.0_1725920089178.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_bne_finetuned_amazon_reviews_multi_samael98_en_5.5.0_3.0_1725920089178.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_bne_finetuned_amazon_reviews_multi_samael98","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_bne_finetuned_amazon_reviews_multi_samael98", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_bne_finetuned_amazon_reviews_multi_samael98| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|446.8 MB| + +## References + +https://huggingface.co/Samael98/roberta-base-bne-finetuned-amazon_reviews_multi \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_catalan_v2_pipeline_ca.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_catalan_v2_pipeline_ca.md new file mode 100644 index 00000000000000..c4494d71fc77c3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_catalan_v2_pipeline_ca.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Catalan, Valencian roberta_base_catalan_v2_pipeline pipeline RoBertaEmbeddings from projecte-aina +author: John Snow Labs +name: roberta_base_catalan_v2_pipeline +date: 2024-09-09 +tags: [ca, open_source, pipeline, onnx] +task: Embeddings +language: ca +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_catalan_v2_pipeline` is a Catalan, Valencian model originally trained by projecte-aina. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_catalan_v2_pipeline_ca_5.5.0_3.0_1725861076101.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_catalan_v2_pipeline_ca_5.5.0_3.0_1725861076101.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_catalan_v2_pipeline", lang = "ca") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_catalan_v2_pipeline", lang = "ca") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_catalan_v2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|ca| +|Size:|295.1 MB| + +## References + +https://huggingface.co/projecte-aina/roberta-base-ca-v2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_corener_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_corener_pipeline_en.md new file mode 100644 index 00000000000000..09ad7295820a44 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_corener_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_base_corener_pipeline pipeline RoBertaEmbeddings from aiola +author: John Snow Labs +name: roberta_base_corener_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_corener_pipeline` is a English model originally trained by aiola. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_corener_pipeline_en_5.5.0_3.0_1725910022715.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_corener_pipeline_en_5.5.0_3.0_1725910022715.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_corener_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_corener_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_corener_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|464.2 MB| + +## References + +https://huggingface.co/aiola/roberta-base-corener + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_cuad_finetuned_model_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_cuad_finetuned_model_en.md new file mode 100644 index 00000000000000..27c67c17ddd47a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_cuad_finetuned_model_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English roberta_base_cuad_finetuned_model RoBertaForQuestionAnswering from vpadman1 +author: John Snow Labs +name: roberta_base_cuad_finetuned_model +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_cuad_finetuned_model` is a English model originally trained by vpadman1. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_cuad_finetuned_model_en_5.5.0_3.0_1725875778356.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_cuad_finetuned_model_en_5.5.0_3.0_1725875778356.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = RoBertaForQuestionAnswering.pretrained("roberta_base_cuad_finetuned_model","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = RoBertaForQuestionAnswering.pretrained("roberta_base_cuad_finetuned_model", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDS.toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_cuad_finetuned_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|454.1 MB| + +## References + +https://huggingface.co/vpadman1/Roberta_base_cuad_finetuned_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_dofla_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_dofla_en.md new file mode 100644 index 00000000000000..ac41ff30785f49 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_dofla_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English roberta_base_dofla RoBertaForQuestionAnswering from Dofla +author: John Snow Labs +name: roberta_base_dofla +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_dofla` is a English model originally trained by Dofla. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_dofla_en_5.5.0_3.0_1725867025911.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_dofla_en_5.5.0_3.0_1725867025911.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = RoBertaForQuestionAnswering.pretrained("roberta_base_dofla","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = RoBertaForQuestionAnswering.pretrained("roberta_base_dofla", "en")
+  .setInputCols(Array("document_question", "document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_dofla| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|461.7 MB| + +## References + +https://huggingface.co/Dofla/roberta-base \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_ewc_15_then_16_fakenews_2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_ewc_15_then_16_fakenews_2_pipeline_en.md new file mode 100644 index 00000000000000..5d952b51f9f79d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_ewc_15_then_16_fakenews_2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_base_ewc_15_then_16_fakenews_2_pipeline pipeline RoBertaForSequenceClassification from bazina +author: John Snow Labs +name: roberta_base_ewc_15_then_16_fakenews_2_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_ewc_15_then_16_fakenews_2_pipeline` is a English model originally trained by bazina. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_ewc_15_then_16_fakenews_2_pipeline_en_5.5.0_3.0_1725920307705.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_ewc_15_then_16_fakenews_2_pipeline_en_5.5.0_3.0_1725920307705.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_ewc_15_then_16_fakenews_2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_ewc_15_then_16_fakenews_2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
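+
+The snippet above applies the pretrained pipeline to a DataFrame `df` that is not defined in this card. A minimal sketch of preparing one and inspecting the output follows; the input column name `text` is an assumption based on the DocumentAssembler stage listed under Included Models.
+
+```python
+# Hypothetical single-column input; "text" is an assumed column name.
+df = spark.createDataFrame([["Scientists confirm that the moon is made of cheese."]]).toDF("text")
+
+annotations = pipeline.transform(df)
+annotations.printSchema()           # lists the annotation columns the pipeline adds
+annotations.show(truncate=False)
+```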
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_ewc_15_then_16_fakenews_2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|433.8 MB| + +## References + +https://huggingface.co/bazina/roberta-base-ewc-15-then-16-fakenews-2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_fine_disaster_tweets_part3_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_fine_disaster_tweets_part3_en.md new file mode 100644 index 00000000000000..632c1e11b0360b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_fine_disaster_tweets_part3_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_base_fine_disaster_tweets_part3 RoBertaForSequenceClassification from victorbahlangene +author: John Snow Labs +name: roberta_base_fine_disaster_tweets_part3 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_fine_disaster_tweets_part3` is a English model originally trained by victorbahlangene. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_fine_disaster_tweets_part3_en_5.5.0_3.0_1725920124631.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_fine_disaster_tweets_part3_en_5.5.0_3.0_1725920124631.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_fine_disaster_tweets_part3","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_fine_disaster_tweets_part3", "en")
+  .setInputCols(Array("document", "token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_fine_disaster_tweets_part3| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|449.3 MB| + +## References + +https://huggingface.co/victorbahlangene/roberta-base-fine-Disaster-Tweets-Part3 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_3d_sentiment_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_3d_sentiment_en.md new file mode 100644 index 00000000000000..2c0fc9c942cb72 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_3d_sentiment_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_base_finetuned_3d_sentiment RoBertaForSequenceClassification from venetis +author: John Snow Labs +name: roberta_base_finetuned_3d_sentiment +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_finetuned_3d_sentiment` is a English model originally trained by venetis. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_3d_sentiment_en_5.5.0_3.0_1725902135663.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_3d_sentiment_en_5.5.0_3.0_1725902135663.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_finetuned_3d_sentiment","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_finetuned_3d_sentiment", "en")
+  .setInputCols(Array("document", "token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_finetuned_3d_sentiment| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|455.8 MB| + +## References + +https://huggingface.co/venetis/roberta-base-finetuned-3d-sentiment \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_dmitva_ai_and_human_generated_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_dmitva_ai_and_human_generated_en.md new file mode 100644 index 00000000000000..8a78b3c3ace64a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_dmitva_ai_and_human_generated_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_base_finetuned_dmitva_ai_and_human_generated RoBertaForSequenceClassification from SkwarczynskiP +author: John Snow Labs +name: roberta_base_finetuned_dmitva_ai_and_human_generated +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_finetuned_dmitva_ai_and_human_generated` is a English model originally trained by SkwarczynskiP. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_dmitva_ai_and_human_generated_en_5.5.0_3.0_1725902604463.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_dmitva_ai_and_human_generated_en_5.5.0_3.0_1725902604463.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_finetuned_dmitva_ai_and_human_generated","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_finetuned_dmitva_ai_and_human_generated", "en")
+  .setInputCols(Array("document", "token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_finetuned_dmitva_ai_and_human_generated| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|425.7 MB| + +## References + +https://huggingface.co/SkwarczynskiP/roberta-base-finetuned-dmitva-AI-and-human-generated \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_dmitva_ai_and_human_generated_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_dmitva_ai_and_human_generated_pipeline_en.md new file mode 100644 index 00000000000000..84b0b859c91242 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_dmitva_ai_and_human_generated_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_base_finetuned_dmitva_ai_and_human_generated_pipeline pipeline RoBertaForSequenceClassification from SkwarczynskiP +author: John Snow Labs +name: roberta_base_finetuned_dmitva_ai_and_human_generated_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_finetuned_dmitva_ai_and_human_generated_pipeline` is a English model originally trained by SkwarczynskiP. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_dmitva_ai_and_human_generated_pipeline_en_5.5.0_3.0_1725902637461.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_dmitva_ai_and_human_generated_pipeline_en_5.5.0_3.0_1725902637461.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_finetuned_dmitva_ai_and_human_generated_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_finetuned_dmitva_ai_and_human_generated_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
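+
+The snippet above applies the pretrained pipeline to a DataFrame `df` that is not defined in this card. A minimal sketch of preparing one and inspecting the output follows; the input column name `text` is an assumption based on the DocumentAssembler stage listed under Included Models.
+
+```python
+# Hypothetical single-column input; "text" is an assumed column name.
+df = spark.createDataFrame([["This paragraph was written by a person, not a model."]]).toDF("text")
+
+annotations = pipeline.transform(df)
+annotations.printSchema()           # lists the annotation columns the pipeline adds
+annotations.show(truncate=False)
+```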
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_finetuned_dmitva_ai_and_human_generated_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|425.7 MB| + +## References + +https://huggingface.co/SkwarczynskiP/roberta-base-finetuned-dmitva-AI-and-human-generated + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_hotpot_qa_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_hotpot_qa_en.md new file mode 100644 index 00000000000000..ad243598adb881 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_hotpot_qa_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English roberta_base_finetuned_hotpot_qa RoBertaForQuestionAnswering from vish88 +author: John Snow Labs +name: roberta_base_finetuned_hotpot_qa +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_finetuned_hotpot_qa` is a English model originally trained by vish88. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_hotpot_qa_en_5.5.0_3.0_1725867158700.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_hotpot_qa_en_5.5.0_3.0_1725867158700.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = RoBertaForQuestionAnswering.pretrained("roberta_base_finetuned_hotpot_qa","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = RoBertaForQuestionAnswering.pretrained("roberta_base_finetuned_hotpot_qa", "en")
+  .setInputCols(Array("document_question", "document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_finetuned_hotpot_qa| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|463.9 MB| + +## References + +https://huggingface.co/vish88/roberta-base-finetuned-hotpot_qa \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_intent_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_intent_en.md new file mode 100644 index 00000000000000..7d6cc4a37fa749 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_intent_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_base_finetuned_intent RoBertaForSequenceClassification from zhiyil +author: John Snow Labs +name: roberta_base_finetuned_intent +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_finetuned_intent` is a English model originally trained by zhiyil. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_intent_en_5.5.0_3.0_1725911492003.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_intent_en_5.5.0_3.0_1725911492003.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_finetuned_intent","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_finetuned_intent", "en")
+  .setInputCols(Array("document", "token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_finetuned_intent| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|299.7 MB| + +## References + +https://huggingface.co/zhiyil/roberta-base-finetuned-intent \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted_en.md new file mode 100644 index 00000000000000..edbfb142c02c23 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted RoBertaForSequenceClassification from ben-yu +author: John Snow Labs +name: roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted` is a English model originally trained by ben-yu. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted_en_5.5.0_3.0_1725911995618.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted_en_5.5.0_3.0_1725911995618.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted", "en")
+  .setInputCols(Array("document", "token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|439.3 MB| + +## References + +https://huggingface.co/ben-yu/roberta-base-finetuned-nlp-letters-s1_s2-pronouns-class-weighted \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted_pipeline_en.md new file mode 100644 index 00000000000000..c8626c5abad46b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted_pipeline pipeline RoBertaForSequenceClassification from ben-yu +author: John Snow Labs +name: roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted_pipeline` is a English model originally trained by ben-yu. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted_pipeline_en_5.5.0_3.0_1725912028884.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted_pipeline_en_5.5.0_3.0_1725912028884.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
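+
+The snippet above applies the pretrained pipeline to a DataFrame `df` that is not defined in this card. A minimal sketch of preparing one and inspecting the output follows; the input column name `text` is an assumption based on the DocumentAssembler stage listed under Included Models.
+
+```python
+# Hypothetical single-column input; "text" is an assumed column name.
+df = spark.createDataFrame([["She wrote the letter herself."]]).toDF("text")
+
+annotations = pipeline.transform(df)
+annotations.printSchema()           # lists the annotation columns the pipeline adds
+annotations.show(truncate=False)
+```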
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_finetuned_nlp_letters_s1_s2_pronouns_class_weighted_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|439.3 MB| + +## References + +https://huggingface.co/ben-yu/roberta-base-finetuned-nlp-letters-s1_s2-pronouns-class-weighted + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_meta_tuning_test_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_meta_tuning_test_en.md new file mode 100644 index 00000000000000..5a183de128f1f0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_meta_tuning_test_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_base_meta_tuning_test RoBertaForSequenceClassification from ruiqi-zhong +author: John Snow Labs +name: roberta_base_meta_tuning_test +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_meta_tuning_test` is a English model originally trained by ruiqi-zhong. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_meta_tuning_test_en_5.5.0_3.0_1725902169419.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_meta_tuning_test_en_5.5.0_3.0_1725902169419.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_meta_tuning_test","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_meta_tuning_test", "en")
+  .setInputCols(Array("document", "token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_meta_tuning_test| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|459.3 MB| + +## References + +https://huggingface.co/ruiqi-zhong/roberta-base-meta-tuning-test \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_meta_tuning_test_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_meta_tuning_test_pipeline_en.md new file mode 100644 index 00000000000000..0308482c2b6a8e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_meta_tuning_test_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_base_meta_tuning_test_pipeline pipeline RoBertaForSequenceClassification from ruiqi-zhong +author: John Snow Labs +name: roberta_base_meta_tuning_test_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_meta_tuning_test_pipeline` is a English model originally trained by ruiqi-zhong. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_meta_tuning_test_pipeline_en_5.5.0_3.0_1725902192531.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_meta_tuning_test_pipeline_en_5.5.0_3.0_1725902192531.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_meta_tuning_test_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_meta_tuning_test_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
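+
+The snippet above applies the pretrained pipeline to a DataFrame `df` that is not defined in this card. A minimal sketch of preparing one and inspecting the output follows; the input column name `text` is an assumption based on the DocumentAssembler stage listed under Included Models.
+
+```python
+# Hypothetical single-column input; "text" is an assumed column name.
+df = spark.createDataFrame([["I love Spark NLP!"]]).toDF("text")
+
+annotations = pipeline.transform(df)
+annotations.printSchema()           # lists the annotation columns the pipeline adds
+annotations.show(truncate=False)
+```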
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_meta_tuning_test_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|459.3 MB| + +## References + +https://huggingface.co/ruiqi-zhong/roberta-base-meta-tuning-test + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_squad2_squad_k5_e3_full_finetune_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_squad2_squad_k5_e3_full_finetune_en.md new file mode 100644 index 00000000000000..3fc7861eb7422c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_squad2_squad_k5_e3_full_finetune_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English roberta_base_squad2_squad_k5_e3_full_finetune RoBertaForQuestionAnswering from umarzein +author: John Snow Labs +name: roberta_base_squad2_squad_k5_e3_full_finetune +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_squad2_squad_k5_e3_full_finetune` is a English model originally trained by umarzein. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_squad2_squad_k5_e3_full_finetune_en_5.5.0_3.0_1725875976064.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_squad2_squad_k5_e3_full_finetune_en_5.5.0_3.0_1725875976064.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = RoBertaForQuestionAnswering.pretrained("roberta_base_squad2_squad_k5_e3_full_finetune","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = RoBertaForQuestionAnswering.pretrained("roberta_base_squad2_squad_k5_e3_full_finetune", "en")
+  .setInputCols(Array("document_question", "document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_squad2_squad_k5_e3_full_finetune| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|463.7 MB| + +## References + +https://huggingface.co/umarzein/roberta-base-squad2-squad-k5-e3-full-finetune \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_base_topic_classification_simple2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_topic_classification_simple2_pipeline_en.md new file mode 100644 index 00000000000000..afbc9d1ac18cc8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_base_topic_classification_simple2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_base_topic_classification_simple2_pipeline pipeline RoBertaForSequenceClassification from Ahmed235 +author: John Snow Labs +name: roberta_base_topic_classification_simple2_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_topic_classification_simple2_pipeline` is a English model originally trained by Ahmed235. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_topic_classification_simple2_pipeline_en_5.5.0_3.0_1725904786942.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_topic_classification_simple2_pipeline_en_5.5.0_3.0_1725904786942.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_topic_classification_simple2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_topic_classification_simple2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
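+
+The snippet above applies the pretrained pipeline to a DataFrame `df` that is not defined in this card. A minimal sketch of preparing one and inspecting the output follows; the input column name `text` is an assumption based on the DocumentAssembler stage listed under Included Models.
+
+```python
+# Hypothetical single-column input; "text" is an assumed column name.
+df = spark.createDataFrame([["The championship match went to extra time."]]).toDF("text")
+
+annotations = pipeline.transform(df)
+annotations.printSchema()           # lists the annotation columns the pipeline adds
+annotations.show(truncate=False)
+```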
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_topic_classification_simple2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|659.1 MB| + +## References + +https://huggingface.co/Ahmed235/roberta-base-topic_classification_simple2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_emotion_mudogruer_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_emotion_mudogruer_en.md new file mode 100644 index 00000000000000..96e9c0da23e440 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_emotion_mudogruer_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_emotion_mudogruer RoBertaForSequenceClassification from mudogruer +author: John Snow Labs +name: roberta_emotion_mudogruer +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_emotion_mudogruer` is a English model originally trained by mudogruer. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_emotion_mudogruer_en_5.5.0_3.0_1725903682177.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_emotion_mudogruer_en_5.5.0_3.0_1725903682177.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_emotion_mudogruer","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_emotion_mudogruer", "en")
+  .setInputCols(Array("document", "token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_emotion_mudogruer| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|441.8 MB| + +## References + +https://huggingface.co/mudogruer/roberta-emotion \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_emotion_mudogruer_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_emotion_mudogruer_pipeline_en.md new file mode 100644 index 00000000000000..bf6c103a6aac0f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_emotion_mudogruer_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_emotion_mudogruer_pipeline pipeline RoBertaForSequenceClassification from mudogruer +author: John Snow Labs +name: roberta_emotion_mudogruer_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_emotion_mudogruer_pipeline` is a English model originally trained by mudogruer. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_emotion_mudogruer_pipeline_en_5.5.0_3.0_1725903708599.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_emotion_mudogruer_pipeline_en_5.5.0_3.0_1725903708599.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_emotion_mudogruer_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_emotion_mudogruer_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
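+
+The snippet above applies the pretrained pipeline to a DataFrame `df` that is not defined in this card. A minimal sketch of preparing one and inspecting the output follows; the input column name `text` is an assumption based on the DocumentAssembler stage listed under Included Models.
+
+```python
+# Hypothetical single-column input; "text" is an assumed column name.
+df = spark.createDataFrame([["I am thrilled about these results!"]]).toDF("text")
+
+annotations = pipeline.transform(df)
+annotations.printSchema()           # lists the annotation columns the pipeline adds
+annotations.show(truncate=False)
+```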
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_emotion_mudogruer_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|441.9 MB| + +## References + +https://huggingface.co/mudogruer/roberta-emotion + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_finetuned_squadcovid_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_finetuned_squadcovid_pipeline_en.md new file mode 100644 index 00000000000000..4bd69e013ea2e7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_finetuned_squadcovid_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English roberta_finetuned_squadcovid_pipeline pipeline RoBertaForQuestionAnswering from Rahul13 +author: John Snow Labs +name: roberta_finetuned_squadcovid_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_finetuned_squadcovid_pipeline` is a English model originally trained by Rahul13. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_finetuned_squadcovid_pipeline_en_5.5.0_3.0_1725875797615.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_finetuned_squadcovid_pipeline_en_5.5.0_3.0_1725875797615.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_finetuned_squadcovid_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_finetuned_squadcovid_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
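+
+The snippet above applies the pretrained pipeline to a DataFrame `df` that is not defined in this card. Since this pipeline starts with a MultiDocumentAssembler (see Included Models), it expects two input columns; the column names in the sketch below are assumptions, so confirm them against the downloaded stages before relying on them.
+
+```python
+# Hypothetical input column names ("question", "context"); inspect the stages
+# of the downloaded pipeline to confirm the configured names.
+print(pipeline.model.stages)
+
+df = spark.createDataFrame(
+    [["What framework do I use?", "I use spark-nlp."]]
+).toDF("question", "context")
+
+annotations = pipeline.transform(df)
+annotations.printSchema()
+annotations.show(truncate=False)
+```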
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_finetuned_squadcovid_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|464.0 MB| + +## References + +https://huggingface.co/Rahul13/roberta-finetuned-squadcovid + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_finetuned_subjqa_chennaiqa_expanded_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_finetuned_subjqa_chennaiqa_expanded_en.md new file mode 100644 index 00000000000000..706a48afc99817 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_finetuned_subjqa_chennaiqa_expanded_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English roberta_finetuned_subjqa_chennaiqa_expanded RoBertaForQuestionAnswering from aditi2212 +author: John Snow Labs +name: roberta_finetuned_subjqa_chennaiqa_expanded +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_finetuned_subjqa_chennaiqa_expanded` is a English model originally trained by aditi2212. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_finetuned_subjqa_chennaiqa_expanded_en_5.5.0_3.0_1725876091123.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_finetuned_subjqa_chennaiqa_expanded_en_5.5.0_3.0_1725876091123.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = RoBertaForQuestionAnswering.pretrained("roberta_finetuned_subjqa_chennaiqa_expanded","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = RoBertaForQuestionAnswering.pretrained("roberta_finetuned_subjqa_chennaiqa_expanded", "en")
+  .setInputCols(Array("document_question", "document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_finetuned_subjqa_chennaiqa_expanded| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|463.6 MB| + +## References + +https://huggingface.co/aditi2212/roberta-finetuned-subjqa-ChennaiQA-expanded \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_finetuned_subjqa_movies_2_vishwasbhushanb_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_finetuned_subjqa_movies_2_vishwasbhushanb_pipeline_en.md new file mode 100644 index 00000000000000..208638b01ddd97 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_finetuned_subjqa_movies_2_vishwasbhushanb_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English roberta_finetuned_subjqa_movies_2_vishwasbhushanb_pipeline pipeline RoBertaForQuestionAnswering from VishwasBhushanB +author: John Snow Labs +name: roberta_finetuned_subjqa_movies_2_vishwasbhushanb_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_finetuned_subjqa_movies_2_vishwasbhushanb_pipeline` is a English model originally trained by VishwasBhushanB. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_finetuned_subjqa_movies_2_vishwasbhushanb_pipeline_en_5.5.0_3.0_1725876387268.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_finetuned_subjqa_movies_2_vishwasbhushanb_pipeline_en_5.5.0_3.0_1725876387268.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_finetuned_subjqa_movies_2_vishwasbhushanb_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_finetuned_subjqa_movies_2_vishwasbhushanb_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
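+
+The snippet above applies the pretrained pipeline to a DataFrame `df` that is not defined in this card. Since this pipeline starts with a MultiDocumentAssembler (see Included Models), it expects two input columns; the column names in the sketch below are assumptions, so confirm them against the downloaded stages before relying on them.
+
+```python
+# Hypothetical input column names ("question", "context"); inspect the stages
+# of the downloaded pipeline to confirm the configured names.
+print(pipeline.model.stages)
+
+df = spark.createDataFrame(
+    [["Who directed the film?", "The film was directed by a first-time filmmaker."]]
+).toDF("question", "context")
+
+annotations = pipeline.transform(df)
+annotations.printSchema()
+annotations.show(truncate=False)
+```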
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_finetuned_subjqa_movies_2_vishwasbhushanb_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|464.1 MB| + +## References + +https://huggingface.co/VishwasBhushanB/roberta-finetuned-subjqa-movies_2 + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_indosquadv2_1691411576_8_2e_05_0_01_5_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_indosquadv2_1691411576_8_2e_05_0_01_5_pipeline_en.md new file mode 100644 index 00000000000000..5ce0486290fda8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_indosquadv2_1691411576_8_2e_05_0_01_5_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English roberta_indosquadv2_1691411576_8_2e_05_0_01_5_pipeline pipeline RoBertaForQuestionAnswering from rizquuula +author: John Snow Labs +name: roberta_indosquadv2_1691411576_8_2e_05_0_01_5_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_indosquadv2_1691411576_8_2e_05_0_01_5_pipeline` is a English model originally trained by rizquuula. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_indosquadv2_1691411576_8_2e_05_0_01_5_pipeline_en_5.5.0_3.0_1725876097634.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_indosquadv2_1691411576_8_2e_05_0_01_5_pipeline_en_5.5.0_3.0_1725876097634.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("roberta_indosquadv2_1691411576_8_2e_05_0_01_5_pipeline", lang = "en")
+# df is a Spark DataFrame that already holds the input column(s) this pipeline expects
+annotations = pipeline.transform(df)
+
+```
+```scala
+import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline
+
+val pipeline = new PretrainedPipeline("roberta_indosquadv2_1691411576_8_2e_05_0_01_5_pipeline", lang = "en")
+// df is a Spark DataFrame that already holds the input column(s) this pipeline expects
+val annotations = pipeline.transform(df)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_indosquadv2_1691411576_8_2e_05_0_01_5_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|423.2 MB| + +## References + +https://huggingface.co/rizquuula/RoBERTa-IndoSQuADv2_1691411576-8-2e-05-0.01-5 + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_large_few_shot_k_1024_finetuned_squad_seed_2_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_large_few_shot_k_1024_finetuned_squad_seed_2_en.md new file mode 100644 index 00000000000000..1238c98e98e7e7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_large_few_shot_k_1024_finetuned_squad_seed_2_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English roberta_large_few_shot_k_1024_finetuned_squad_seed_2 RoBertaForQuestionAnswering from anas-awadalla +author: John Snow Labs +name: roberta_large_few_shot_k_1024_finetuned_squad_seed_2 +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_large_few_shot_k_1024_finetuned_squad_seed_2` is a English model originally trained by anas-awadalla. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_large_few_shot_k_1024_finetuned_squad_seed_2_en_5.5.0_3.0_1725875865466.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_large_few_shot_k_1024_finetuned_squad_seed_2_en_5.5.0_3.0_1725875865466.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.base import MultiDocumentAssembler
+from sparknlp.annotator import RoBertaForQuestionAnswering
+from pyspark.ml import Pipeline
+
+# Turn the raw question/context columns into annotated documents
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = RoBertaForQuestionAnswering.pretrained("roberta_large_few_shot_k_1024_finetuned_squad_seed_2","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+import com.johnsnowlabs.nlp.MultiDocumentAssembler
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+// Turn the raw question/context columns into annotated documents
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = RoBertaForQuestionAnswering.pretrained("roberta_large_few_shot_k_1024_finetuned_squad_seed_2", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
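+Once `pipelineDF` has been computed, the predicted span sits in the `answer` column defined above, using Spark NLP's standard annotation schema (`result`, `begin`, `end`, `metadata`). A small sketch of reading it back out:
+
+```python
+# Extracted answer text per row
+pipelineDF.select("answer.result").show(truncate=False)
+
+# Full annotation structs, including character offsets and per-answer metadata
+pipelineDF.selectExpr("explode(answer) AS ann") \
+    .selectExpr("ann.result", "ann.begin", "ann.end", "ann.metadata") \
+    .show(truncate=False)
+```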
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_large_few_shot_k_1024_finetuned_squad_seed_2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/anas-awadalla/roberta-large-few-shot-k-1024-finetuned-squad-seed-2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_large_go_emotions_3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_large_go_emotions_3_pipeline_en.md new file mode 100644 index 00000000000000..f9b55dc56fc536 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_large_go_emotions_3_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_large_go_emotions_3_pipeline pipeline RoBertaForSequenceClassification from tasinhoque +author: John Snow Labs +name: roberta_large_go_emotions_3_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_large_go_emotions_3_pipeline` is a English model originally trained by tasinhoque. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_large_go_emotions_3_pipeline_en_5.5.0_3.0_1725904874016.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_large_go_emotions_3_pipeline_en_5.5.0_3.0_1725904874016.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("roberta_large_go_emotions_3_pipeline", lang = "en")
+# df is a Spark DataFrame that already holds the input column(s) this pipeline expects
+annotations = pipeline.transform(df)
+
+```
+```scala
+import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline
+
+val pipeline = new PretrainedPipeline("roberta_large_go_emotions_3_pipeline", lang = "en")
+// df is a Spark DataFrame that already holds the input column(s) this pipeline expects
+val annotations = pipeline.transform(df)
+
+```
+
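+A minimal end-to-end sketch for the snippet above (not part of the original card): the `text` input column is an assumption about how the pipeline's DocumentAssembler stage is configured, and `class` is the conventional output column name for the sequence classifier, so verify both against `pipeline.model.stages` and `annotations.columns`.
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("roberta_large_go_emotions_3_pipeline", lang = "en")
+
+# Assumed input column: "text"
+df = spark.createDataFrame([["I am so happy this finally works!"]]).toDF("text")
+annotations = pipeline.transform(df)
+
+# Assumed output column: "class" (one or more GoEmotions labels per row)
+annotations.select("class.result").show(truncate=False)
+```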
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_large_go_emotions_3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/tasinhoque/roberta-large-go-emotions-3 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_large_mrpc_lora_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_large_mrpc_lora_en.md new file mode 100644 index 00000000000000..d2de3d043dfcb6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_large_mrpc_lora_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_large_mrpc_lora RoBertaForSequenceClassification from FelixChao +author: John Snow Labs +name: roberta_large_mrpc_lora +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_large_mrpc_lora` is a English model originally trained by FelixChao. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_large_mrpc_lora_en_5.5.0_3.0_1725912182583.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_large_mrpc_lora_en_5.5.0_3.0_1725912182583.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import Tokenizer, RoBertaForSequenceClassification
+from pyspark.ml import Pipeline
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_large_mrpc_lora","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_large_mrpc_lora", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
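+For quick ad-hoc predictions on plain strings, the fitted `pipelineModel` from the example above can also be wrapped in a LightPipeline instead of going through a DataFrame; the `class` key below is simply the output column set in that example.
+
+```python
+from sparknlp.base import LightPipeline
+
+# LightPipeline runs the fitted pipeline on Python strings without building a DataFrame
+light = LightPipeline(pipelineModel)
+result = light.annotate("I love spark-nlp")
+print(result["class"])  # predicted label(s) for the sentence
+```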
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_large_mrpc_lora| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/FelixChao/roberta-large-mrpc-lora \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_large_rte_jamesdborin_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_large_rte_jamesdborin_pipeline_en.md new file mode 100644 index 00000000000000..0872f6e028c776 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_large_rte_jamesdborin_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_large_rte_jamesdborin_pipeline pipeline RoBertaForSequenceClassification from jamesdborin +author: John Snow Labs +name: roberta_large_rte_jamesdborin_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_large_rte_jamesdborin_pipeline` is a English model originally trained by jamesdborin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_large_rte_jamesdborin_pipeline_en_5.5.0_3.0_1725902721767.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_large_rte_jamesdborin_pipeline_en_5.5.0_3.0_1725902721767.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("roberta_large_rte_jamesdborin_pipeline", lang = "en")
+# df is a Spark DataFrame that already holds the input column(s) this pipeline expects
+annotations = pipeline.transform(df)
+
+```
+```scala
+import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline
+
+val pipeline = new PretrainedPipeline("roberta_large_rte_jamesdborin_pipeline", lang = "en")
+// df is a Spark DataFrame that already holds the input column(s) this pipeline expects
+val annotations = pipeline.transform(df)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_large_rte_jamesdborin_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/jamesdborin/Roberta-Large-RTE + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_legal_german_cased_german_legal_squad_part_augmented_1000_de.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_legal_german_cased_german_legal_squad_part_augmented_1000_de.md new file mode 100644 index 00000000000000..895217c2766412 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_legal_german_cased_german_legal_squad_part_augmented_1000_de.md @@ -0,0 +1,86 @@ +--- +layout: model +title: German roberta_legal_german_cased_german_legal_squad_part_augmented_1000 RoBertaForQuestionAnswering from farid1088 +author: John Snow Labs +name: roberta_legal_german_cased_german_legal_squad_part_augmented_1000 +date: 2024-09-09 +tags: [de, open_source, onnx, question_answering, roberta] +task: Question Answering +language: de +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_legal_german_cased_german_legal_squad_part_augmented_1000` is a German model originally trained by farid1088. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_legal_german_cased_german_legal_squad_part_augmented_1000_de_5.5.0_3.0_1725867309591.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_legal_german_cased_german_legal_squad_part_augmented_1000_de_5.5.0_3.0_1725867309591.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.base import MultiDocumentAssembler
+from sparknlp.annotator import RoBertaForQuestionAnswering
+from pyspark.ml import Pipeline
+
+# Turn the raw question/context columns into annotated documents
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = RoBertaForQuestionAnswering.pretrained("roberta_legal_german_cased_german_legal_squad_part_augmented_1000","de") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+import com.johnsnowlabs.nlp.MultiDocumentAssembler
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+import spark.implicits._
+
+// Turn the raw question/context columns into annotated documents
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = RoBertaForQuestionAnswering.pretrained("roberta_legal_german_cased_german_legal_squad_part_augmented_1000", "de")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_legal_german_cased_german_legal_squad_part_augmented_1000| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|de| +|Size:|465.8 MB| + +## References + +https://huggingface.co/farid1088/RoBERTa-legal-de-cased_German_legal_SQuAD_part_augmented_1000 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_ner_polygot_MT4TS_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_ner_polygot_MT4TS_en.md new file mode 100644 index 00000000000000..fa382d4186377b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_ner_polygot_MT4TS_en.md @@ -0,0 +1,106 @@ +--- +layout: model +title: English RobertaForTokenClassification Cased model (from kevinjesse) +author: John Snow Labs +name: roberta_ner_polygot_MT4TS +date: 2024-09-09 +tags: [bert, ner, open_source, en, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RobertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `polygot-MT4TS` is a English model originally trained by `kevinjesse`. + +## Predicted Entities + +`Framebuffer`, `CryptoService`, `ClientLocation`, `AliasMapEntry`, `KeysType`, `CodeGen`, `DOMExplorerClient`, `GetAssessmentCommandInput`, `OnSaveProps`, `IKeyBinding`, `FirmwareUpdateMetaDataCC`, `DropHandlerProps`, `UpdateApplicationCommand`, `ItemProperties`, `CreateTopicResponse`, `HttpInterceptor`, `ClusterOptions`, `ex.PostUpdateEvent`, `AddTagsToResourceCommand`, `HookContext`, `LogFormatter`, `VisibilityEdge`, `BaseRoute`, `IpcRendererService`, `VocabularySortType`, `TweetItem`, `TwStyle`, `AbstractStatusBarLabelItem`, `GetByEmailAccountsValidationResult`, `TypeValues`, `Utxo`, `IGetRefParamsExternal`, `SubDirectory`, `Promisable`, `TRoutes`, `HostRef`, `DeleteGroupCommandInput`, `TMigrableEnvelope`, `ContextWithActive`, `CreatedInstance`, `ErrorPaths`, `XYZ`, `MatGridList`, `TemplateCategory`, `InterfaceVpcEndpoint`, `Config3D`, `DebounceOptions`, `HighlightSet`, `AnyRenderFunction`, `CompositeAnimation`, `BitstreamFormat`, `NamedImport`, `ProviderOverride`, `MonitoringOutput`, `KeywordCxt`, `msRest.OperationParameter`, `TSVBTables`, `WorkspaceFoldersChangeEvent`, `IReporter`, `KeyframeTrack`, `ObservableArrayAdministration`, `IGitExtension`, `PoolingService`, `BasicCredentialHandler`, `TNSDOMMatrixBase`, `ParseSuccess`, `AggregationData`, `WebSiteManagementModels.StringDictionary`, `PossibilityChild`, `SnackBarService`, `FlushEventArgs`, `DataTableDirective`, `RoseChartSlice`, `BINModelInstance`, `DefaultProps`, `ValueState`, `Box`, `ValidationFuncArg`, `ast.Grammar`, `IToolbarProps`, `RtcpRrPacket`, `ProductUpdateReason`, `MappingPatternInfo`, `GroupKeysOrKeyFn`, `ArrayBindingElement`, `GenerateFileCommandParameters`, `PrivateInstance`, `t.Type`, `StartTransformsRequestSchema`, `CreateDistributionCommandInput`, `GeoCoordLike`, `GraphQLModules.ModuleContext`, `ISPUser`, `IceCandidate`, `AddToQueryLogDependencies`, `InputObjectType`, `ProgressBarState`, `EventHandlerInfosForElement`, `StorageObjectAck`, `MorphOptions`, `CppConfigItem`, `TxInput`, 
`NotionalAndUnrealizedPnlReturns`, `PortingLocation`, `Check`, `SetRepositoryPolicyCommandInput`, `AlertIconProps`, `ProgramIds`, `IColorMappingFunction`, `GetFieldFormat`, `lsp.TextDocumentPositionParams`, `SInt64`, `LitvisNarrative`, `DescribeJobsCommandInput`, `WorkspaceNode`, `YargsArgs`, `requests.ListLimitValuesRequest`, `TerminalService`, `requests.ListUserAnalyticsRequest`, `Group.Scalar`, `ParsedOptions`, `NavbarProps`, `RoleProps`, `OAuthException`, `AuthPartialState`, `Hooks`, `DeprecationsRegistry`, `MockCacheService`, `IFieldCustomizerCellEventParameters`, `ILauncher`, `RelationAttrInfo`, `IEditor`, `HierarchyOfMaps`, `SpellInfoDetails`, `ProcessId`, `ServerSideTransactionResult`, `MotionResult`, `ResolvedConceptAtomType`, `Club`, `HttpContentType`, `QuestionSelectBase`, `UIRoastingMachineStorage`, `BSPTraversalAction`, `cc.BoxCollider`, `FixCreator`, `DeleteClusterRequest`, `FacebookAuthProvider`, `StructureListMember`, `webpack.RuleSetRule`, `configuration.Publications`, `VerifyRes`, `S3Service`, `RequestParser`, `ArgsOf`, `types.Transport`, `Toxic`, `MonitoringAdapter`, `MockOracleInstance`, `IActionMethodAttribute`, `SlotData`, `UserData`, `SearchPredicate`, `FilterFunctionReturnType`, `AccountsService`, `DeleteRepositoryPayload`, `TSESTree.Expression`, `ITaskItem`, `QueryToken`, `DashboardState`, `ElasticsearchServiceStart`, `BridgeableGuildChannel`, `Const`, `btQuaternion`, `CustomFunctions`, `JWK`, `evt_disasm_sub`, `TabifyBuckets`, `Pokedex`, `EscrowedPayment`, `Hsla`, `Promise`, `IMetadataStorage`, `ModalsStateEntry`, `HttpRequestConfig`, `UserFacingSerializedSingleAssetDataTypes`, `ExpenseCategoriesService`, `CanvasMethod`, `GX.AttenuationFunction`, `PDFHexString`, `DataPersistence`, `TagMap`, `LayertreeItemNode`, `LoggerClient`, `SignatureOptions`, `StaticQueryDocument`, `SendMessageCommandInput`, `InputOperation`, `ParticleEmitter2Object`, `SavedObjectsUpdateOptions`, `TransferBtcBasedBlockchain`, `CaptureStdout`, `RemoteProvider`, `LabelPosition`, `DeleteThemeCommandInput`, `PersonType`, `NotificationConfig`, `BasicDataProvider`, `RequestStatus`, `MessageBoxOptions`, `LightSetting`, `TranscriptConsequenceSummary`, `RgbColor`, `GLRenderHash`, `AnyObject`, `MapEntity`, `url.URL`, `IPolygonData`, `QueuedResponse`, `OfficeMockObject`, `DayKey`, `ListControllerProps`, `LoadCollections`, `CallArgs`, `DeleteApplicationInputProcessingConfigurationCommandInput`, `IRenderInfo`, `EnvironmentConfig`, `HIDDevice`, `TransmartCountItem`, `ExpShapeSymbol`, `HsSaveMapManagerService`, `Runtime`, `RowData`, `ObservableState`, `TemplateSpan`, `TargetColumnGeometry`, `BarcodeMetadata`, `DescribePendingMaintenanceActionsMessage`, `DistributionProps`, `KeyObject`, `IOpdsLinkView`, `FormattedStatus`, `def.Matrix44`, `IFileWithMeta`, `IGaeaSetting`, `SVGNode`, `DateInputProps`, `DejaPopupReponse`, `TaskStore`, `OpenLinkComponent`, `ComponentName`, `NodeSpec`, `IMrepoDigestConfigFilePath`, `SLabel`, `Themes`, `SingleAssetTwoPartyIntermediaryAgreement`, `Child`, `GeoAngle`, `WorkSheet`, `IApplicationHealthStateChunk`, `LongitudeLatitudeNumber`, `postcss.AtRule`, `RetentionPeriod`, `RuleActionChange`, `OPCUAServerEndPoint`, `AtomList`, `SelectOptionProps`, `BoardType`, `ChannelCredentials`, `DirectoryWatcherCallback`, `ComponentFunction`, `StationComplementPlugin`, `BaThemeConfigProvider`, `StackGroupConfigNode`, `TorrentInfo.Info`, `ProfileNode`, `BrowsePath`, `INodeProperties`, `MockDomController`, `Campaign`, `ENDAttribute`, `PositionStyleProps`, `LogAnalyticsLabelAlias`, 
`inquirerTypes.PromptModule`, `cc.Vec2`, `ast.ParserRule`, `RTCSctpCapabilities`, `androidx.fragment.app.FragmentManager`, `LeaveAction`, `KeyCombine`, `WorkRequestCollection`, `DinoErrorController`, `ISearchResultState`, `IfScope`, `CacheType`, `ActionCreator`, `IPane`, `ClassStaticBlockDeclaration`, `SocketProxy`, `ResponseMetadata`, `ParseIterator`, `StartCliConfig`, `DmmfDocument`, `ActionStatus`, `ERC1155OperatorMock`, `IBucketHistogramAggConfig`, `NumberArray`, `ChainID.Mainnet`, `ArgGetter`, `KeyPairEthereumPaymentsConfig`, `SubModel`, `ts.PropertySignature`, `ListLeaderboardRecordsAroundOwnerRequest`, `NotebookInfo`, `FakeProgbar`, `TRPCClientError`, `IAssetActionTypes`, `RequestEnvelope`, `TelemetryWorker`, `DeployStacksIO`, `TamaguiInternalConfig`, `RequestAPI`, `CommentUI`, `DestinationCertificate`, `GbTreeNode`, `symbol`, `LowpassCombFilter`, `EntityElements`, `PreparedQuery`, `RStream`, `AutoImporter`, `Pack`, `MetaReducer`, `CeramicClient`, `MetaIndexState`, `MetroConfig`, `IFlavorInfo`, `IPC.IShellProcess`, `IDeliState`, `IListMultipleData`, `PullRequestOpened`, `HandlerInput`, `DataManager`, `RangeFilter`, `IContextErrorData`, `Joplin`, `IProjectType`, `StackStyleProps`, `ControlledStateProperies`, `EasyPZ`, `SymbolParam`, `SessionRefreshRequest`, `Env`, `ChannelListItem`, `ICustomValidatorResponse`, `KeyboardProps`, `Solo`, `RelativeLink`, `TransactionOptions`, `ConnectionArguments`, `HoverInsertActions`, `Types.ReadyType`, `_DeepReadonlyObject`, `StorageProvider`, `QueryOrdersRequest`, `VideoStreamIndex`, `ICharaProfile`, `reduxForm.ConverterForm`, `IBaseTaskParams`, `EqualityMap`, `CodeBlockWriter`, `HitSensorInfo`, `JRPCEngineEndCallback`, `AndroidOutput`, `RenderPassContext`, `WebGLContextAttributes`, `TableForeignKey`, `PutLoggingOptionsCommandInput`, `actions.Args`, `PublicModelRouter`, `CodelistService`, `WhereFilterOp`, `IConvectorControllerClient`, `Refiner`, `ReadAllCallback`, `ShadowsocksManagerService`, `SchemaComparator`, `Monitor`, `ProjectConfigData`, `UpdateAssetCommandInput`, `IUpsertScalingPolicyCommand`, `ResourceTypeSummary`, `InfluxVersion`, `HtmlTemplate`, `GraphQLRequestContextDidResolveOperation`, `Perspective`, `CompareLookup`, `Contact`, `SetIamPolicyRequest`, `CollectionManifest`, `MetadataReader`, `MetaesContext`, `CryptographyService`, `ProviderDef`, `BoxStyleProps`, `UberChoice`, `STColumn`, `TipsLabels`, `SymbolWithParent`, `FilePropertyProps`, `AnyRawBuilder`, `K6`, `MockRequestInit`, `DiscordMessageReactionAdd`, `MessageInfo`, `LogicOperator`, `TStyle`, `IBookmarkState`, `Animate`, `ChangeEventHandler`, `NormalizedOptions`, `DeleteServiceRequest`, `PluralSub`, `ContractCallPayload`, `SerializedError`, `vscode.TextEditorSelectionChangeEvent`, `LicenseStatus`, `SourceDescription`, `ClsService`, `xDatum`, `RequestLimitConfig`, `RegExpExecArray`, `MarkedString`, `ClampedValue`, `ProcessExecution`, `FetchedBalances`, `ModelerFourOptions`, `MidiInstrument`, `Callable`, `CloudFormation`, `IViewHandler`, `BlockPos`, `Just`, `TVariables`, `PrimitiveValue`, `NamedExoticComponent`, `TClientData`, `IFriend`, `V.Scheme`, `MyButton`, `AsyncIterableQuery`, `StackUndeployOperation`, `TreeConfiguration`, `com.google.ar.sceneform.Node`, `DiagnosticsCallFeatureState`, `ComponentSID`, `TSContinue`, `LocalStorageIndex`, `Webview`, `ReactFramework`, `ThySlideConfig`, `MangoCache`, `jsdoc.Annotation`, `PSPoint`, `TabFragmentImplementation`, `InputWithModel`, `BufferViewResult`, `SavedObjectsClient`, `TxStatus`, `TimelineChartRange`, `ExperimentalStickering`, 
`MarkMap`, `ResourceArray`, `InputOption`, `IPnpmShrinkwrapDependencyYaml`, `TextClassification`, `PvsFormula`, `IBlob`, `MerkleInclusionQuantifier`, `Tx`, `PartialState`, `ListPositionCacheEntry`, `TestToken`, `MetricsStore`, `ExpandPanelActionContext`, `PageListProps`, `EmailActionConnector`, `AllureTest`, `SharingUpdate`, `EnumerationDefinitionSchema`, `IBufferLine`, `cp.SpawnOptions`, `OpenSearchRawResponseExpressionTypeDefinition`, `ComponentComment`, `EvaluationConfig`, `ScopeFilter`, `IAppConfig`, `QualifiedRules`, `IMutationTree`, `BaseManifestGenerator`, `NowResponse`, `GetRecordsCommandInput`, `LogParams`, `QuoteOptions`, `FinancialViewEntry`, `DecryptionMaterial`, `AnyKey`, `EdgeAttributes`, `OptionObject`, `IRECProduct`, `VisualizationDimensionGroupConfig`, `IInternalActionContext`, `SlpTransactionDetails`, `RouteMatcher`, `MiddlewareFactory`, `RectShape`, `IDropDownTreeViewNode`, `StyleTokens`, `NewTootState`, `SendChannelMessageCommandInput`, `LuaThread`, `ECS`, `Entry`, `Anchored`, `OnPreResponseToolkit`, `CategorizedSettings`, `IParseHandler`, `ArcoOptions`, `ConfigYaml`, `KGS.DataDigest`, `ChainId`, `CookiesOptions`, `DialogProperty`, `TranslationConfig`, `DestinationConfig`, `EdgeData`, `BaseTx`, `NgxsWebsocketPluginOptions`, `TRgb`, `SearchFormLayoutProps`, `PoolState`, `OrganizationProjectService`, `VscodeWrapper`, `MatchNode`, `AlertContextOptions`, `Rec`, `XRFrame`, `RegistrationData`, `dScnKy_env_light_c`, `ValueFilterPlugin`, `GfxRenderInstManager`, `SearchSource`, `CallHierarchyService`, `NoteNumberOrName`, `PagedResp`, `DaySpan`, `ExtendedPOIDetails`, `CreateTableCommandInput`, `Menu`, `RawSavedDashboardPanel630`, `InitializeHandlerArguments`, `OpenCVOperatipnParams`, `MatAutocompleteSelectedEvent`, `ReferencesResult`, `PrivKey`, `ColExpression`, `Planet`, `AttributeKey`, `FsReadOptions`, `ChainMergeContext`, `ConfirmOptions`, `WsKey`, `HsCommonEndpointsService`, `FetchResult`, `IPatchList`, `StartApplicationCommandInput`, `t.ValidationError`, `PyChessModel`, `YarnLock`, `TeamService`, `MockTextNode`, `XMessageOption`, `AxisProps`, `Results`, `ExclusionVisibleVirtualNode`, `FixedDomPosition`, `SignedBlockType`, `SyntheticKeyboardEvent`, `FontCatalog`, `NextCharacter`, `MiBrushRepaintConfig`, `DejaSnackbarComponent`, `ExportOptions`, `AnimatedComponent`, `HandleReference`, `Awaiter`, `ConnectionHealthData`, `RequestEntity`, `TableColumnDirective`, `GltfAsset`, `GetReferenceOptions`, `VcsService`, `EPObject`, `IExternalHooksFunctions`, `ParticipantTracks`, `GraphQLSchemaPlugin`, `IRenderMime.IMimeModel`, `KeyframeNode`, `FormEvent`, `UpSetThemes`, `TwoSlashReturn`, `AppRecord`, `PageResource`, `FilePaths`, `IContextualMenuItem`, `ListenerType`, `ButteryNode`, `LoadingService`, `SimpleScalarXmlPropertiesCommandInput`, `d.OutputTargetDistCustomElements`, `KueryNode`, `DescribeDBSnapshotsCommandInput`, `RoutedPoint`, `MockRule`, `FeedId`, `PointLike`, `VqlClient`, `NetworkResolver`, `LoadableClassComponent`, `MenuData`, `ImageFilter`, `INvModule`, `SearchableItemPresenter`, `PublisherProperties`, `StyleSheetType`, `PreferenceProviderDataChange`, `Moize`, `PrepareReactRender`, `DownloadOptions`, `crypto.BinaryLike`, `IAddressBookState`, `EPPrimitiveDependencies`, `ActionsSdkApp`, `Yield`, `Primitives.Numeric`, `WhenToMatchOptions`, `IReduxState`, `AtomRef`, `ServiceCatalogSummary`, `TodoItem`, `VcalDateOrDateTimeProperty`, `SaveDialogOptions`, `IHawkularAlertsManager`, `CommanderStatic`, `ValidationResults`, `NzModalRef`, `LegacyVars`, `SubTypeBuilder`, 
`GetConnectivityInfoCommandInput`, `TransferData`, `BeInspireTreeNodes`, `TreeMap`, `GainEntry`, `vault`, `JupiterOneClient`, `CoapResponse`, `cc.SpriteFrame`, `LocalizeRouterService`, `PercentileRanksMetricAggDependencies`, `FnN2`, `ArgTypes`, `AttachmentInfo`, `CupertinoDynamicColor`, `MockDataset`, `msRest.ServiceCallback`, `EntityNameExpression`, `LogoutOptions`, `XSession`, `Aperture`, `ThisTypeNode`, `HotswappableChangeCandidate`, `SourceInfo`, `TContext`, `ColorSwatchProps`, `RulesTestContext`, `LoggerNamespace`, `RippleBalanceMonitor`, `TransactionObject`, `IFilterModel`, `MovieDetails`, `ProgressListener`, `IE`, `Protocol.Network.ResponseReceivedEvent`, `ThyClickDispatcher`, `IMGUI`, `MyTypeDeclarative`, `TObj`, `PeopleIterator`, `MessageAttributeMap`, `KeyChange`, `PipeState`, `FloatAnimationTrack`, `ListApmDomainWorkRequestsRequest`, `AggregationResponse`, `CurrencyType`, `DialogContentProps`, `LogoProps`, `IChunk`, `ComponentRequestTable`, `BigQueryRetrievalRow`, `PermissionState`, `TabView`, `Tag`, `Vector3D`, `DeleteEventSubscriptionMessage`, `DateConstructor`, `NetworkContracts`, `NettuAppResponse`, `TypeLiteral`, `CoordinateXYZ`, `FactoryFunction`, `SkyhookDndService`, `IRawHealthEvaluation`, `LoadingManager`, `TableCellSlot`, `TabBarToolbarRegistry`, `ResponseFormat`, `ListFleetsCommandInput`, `ReduxLikeStateContainer`, `SendMessageOptions`, `SceneItem`, `ExtendedHttpsTestServer`, `SpacesService`, `IInspectorState`, `D3LinkNode`, `EndpointDescription`, `TypedThunk`, `VTF`, `RemoveTagsCommandInput`, `ChartPointSourceItem`, `Counter__factory`, `AccessKeyStorageJson`, `Diagram`, `ProductJson`, `PathNode`, `VpcSubnetArgs`, `TimelineEvent`, `GitHubEventModel`, `DebugOption`, `ImmutableCollection`, `AccountingRecord`, `IChainForkConfig`, `OpenPGPKey`, `AuthorizationRules`, `DeleteAccessPointCommandInput`, `core.CallbackOptionallyAsync`, `TerritoryAliasMap`, `CreateDedicatedIpPoolCommandInput`, `RecipientType`, `CalendarObject`, `InterfaceWithCallSignatureReturn`, `IObjectWillChange`, `JDevice`, `Paint`, `TextChannel`, `PostProcessingRule`, `DestroyArgv`, `WatchEvent`, `EventStreamSeed`, `IBaseImageryLayer`, `DAL.DEVICE_ID_COMPONENT`, `later`, `StopDeploymentCommandInput`, `P2PRequestPacketBufferData`, `EventListener`, `Text.JSON`, `GlobalParameter`, `Schema$RowData`, `interfaces.Target`, `React.FC`, `AnimationInternal`, `IfStmt`, `DefinitionElementProps`, `postcss.Node`, `MockComment`, `Insights`, `VisitResult`, `BlockbookConnectedConfig`, `Detector`, `ValidationRunData`, `AmmLiquidityPool`, `ITasksGetByContactState`, `VRMDebugOptions`, `UpdateThemeDto`, `CartItemsResponse`, `FileBox`, `MeetingCompositeStrings`, `tinycolor.Instance`, `BaseScreen`, `MarketResponse`, `TransactionGasPriceComputator`, `FactorySession`, `ScopeDef`, `LifecycleEvent`, `Tile`, `BundleEntry`, `Transducer`, `ScriptCompiler`, `UpdatePortalCommandInput`, `MouseMoveEvent`, `d.CompilerRequest`, `IProductOptionTranslatable`, `TradeType`, `ScriptInfo`, `Conversion`, `ControlPanelsContainerProps`, `NameIdentifierNode`, `VisitorInvocation`, `JournalMetadata`, `CompareFunction`, `ExprVisitor`, `LoginService`, `ModProperty`, `DragRef`, `Testrec`, `TClient`, `InstanceGeometryData`, `DescribeLimitsCommandInput`, `UiActionsPlugin`, `DatasourceOverwrite`, `CalculatorTestContext`, `EntryContext`, `SupportedEncoding`, `FileGroup`, `AggregateQuery`, `NetplayPlayer`, `PortalInfo`, `HeroesState`, `NormalizeStateContext`, `MockEnv`, `ModelQueryBuilderContract`, `AttrNode`, `providers.Provider`, `BrowserObject`, 
`GlobalVariantGroupConfig`, `SeverityLevel`, `GetPartitionIndexesCommandInput`, `DataSourceInstanceSettings`, `QuizReaction`, `ts.ElementAccessExpression`, `sdk.SpeechSynthesisResult`, `MiddlewareNext`, `Resp`, `DecoratorDef`, `ComponentStyle`, `Mesh3D`, `GcpCloudProvider`, `IRes`, `Nightmare`, `SHA512_256`, `GetSelector`, `RenderWizardArguments`, `InterfaceMapValue`, `ResizeObserver`, `TypeBinding`, `JoinedReturnType`, `FetchRequestId`, `JacobianPoint`, `ICoordinate`, `PaymentV1`, `ResourceMap`, `CodeBuilderConcrete`, `ResolvedDeclarationList`, `WalkMemberContext`, `requests.ListVolumeGroupBackupsRequest`, `SpecPickerInput`, `AccountRefresh`, `CheatModeMachineContext`, `CompletionSymbol`, `DateLocale`, `KontentItemInput`, `CommentDoc`, `SwaggerLambdas`, `MaterialParams`, `CompareFunc`, `ArgumentBuilder`, `Library`, `NoteEditorState`, `ViewableGrid`, `google.maps.Marker`, `TradeProvider`, `LineChart`, `NzUploadChangeParam`, `MockLogger`, `ModuleKey`, `DashboardData`, `pageNo`, `IconMenuItem`, `AzureDataTablesTestEntity`, `SObjectRefreshOutput`, `ChangeState`, `ITechnology`, `SessionStateControllerAction`, `InitializeParams`, `OrderedIndex`, `ShortConnectionDTO`, `Type_List`, `INotebookTracker`, `ColProps`, `StatefulSearchBarProps`, `ts.ExportSpecifier`, `StatusActionQueue`, `TRPCResponse`, `IUpdateStacksCommandArgs`, `RstStreamFrame`, `AvailableFeature`, `IFormControlContext`, `Dict`, `FormLayoutProps`, `ClientProxy`, `AdbSocket`, `INetworkPlayer`, `Valid`, `VolumeAttachment`, `AlertingRouter`, `OnPreAuthToolkit`, `ClJobs`, `SegmentDetail`, `IRecordReference`, `ICommandArguments`, `LitecoinAddressFormat.Modern`, `IAppInfo`, `CreateDBClusterCommandInput`, `ServerDataItem`, `IQueryConfig`, `MigrationResult`, `CellService`, `PluginCtx`, `SafeExpr`, `App.services.IRequestService`, `ProxyConfiguration`, `SingleSigHashMode`, `ButtonListenerCallback`, `FoundNodeFunction`, `MsgDeleteProvider`, `IOpts`, `SummaryNode`, `StashTabSnapshot`, `BlockProps`, `StarknetERC721ContextInterface`, `CustomRouteShorthandOptions`, `RestoreRequest`, `SGMark`, `InstructionParams`, `BUTTON_SHAPE`, `EndpointType`, `SearchResultsPage`, `LaunchOptions`, `TradingPosition`, `Epoch`, `DateRangeItemInfo`, `BaseResourceOptions`, `StringFilter`, `RuntimeField`, `NavbarService`, `IThrottleService`, `PopupModelConfig`, `PropertyValue`, `ITable`, `EPersonDataService`, `ObservableLanguagePair`, `DefaultRequestOptions`, `FullIconifyIcon`, `Immediate`, `SpeakerActions`, `LogFilter`, `BitfinexWebsocket`, `TD.DataSchema`, `ThemedComponentThis`, `Optimizer`, `Fee`, `ListDeliverabilityTestReportsCommandInput`, `StateUpdate`, `SourceControl`, `ShortcutObject`, `ISerDeDataSet`, `BookmarkTreeNode`, `ProviderService`, `GraphQLError`, `d.RobotsTxtOpts`, `SpeechSynthesisUtterance`, `EntityDocumentResult`, `IgnoredCommentContext`, `CanaryScope`, `analyze.Options`, `ProductVariantService`, `InventoryStat`, `SimpleButton`, `CmsEditorFieldRendererPlugin`, `AnimationDirection`, `CollectionObj`, `TestServer`, `EditingData`, `FeaturesList`, `requests.ListVmClustersRequest`, `TCompactProtocol`, `IRoomObject`, `ContractConfig`, `Text_2`, `ParamInstance`, `QueryFieldMap`, `RehypeNode`, `ForbiddenWordsInfo`, `IDataModel`, `NgStyleInterface`, `RouterNavigatedAction`, `AlgoliaClient`, `RouterMock`, `QuantityLabel`, `jsiiReflect.Type`, `MeterCCGet`, `SDKModels`, `GLenum`, `LegendItemList`, `PromiseBase`, `AadHttpClient`, `Tournament`, `VerdaccioError`, `Media`, `FindAndModifyWriteOpResultObject`, `ActionSequence`, `AggHistoryEntry`, 
`ResolveSavedObjectsImportErrorsOptions`, `ContextModel`, `templateDataType`, `ExclusiveTestFunction`, `AppSocket`, `TypeCase`, `FunctionProperties`, `ParsedNode`, `Buntstift`, `ColumnData`, `PerpMarketConfig`, `CustomElement`, `MultiChannelCCCommandEncapsulation`, `AppProps`, `Vector2_`, `ResponseFixtures`, `requests.ListGroupsRequest`, `React.SyntheticEvent`, `NoticeProps`, `AssetServiceClient`, `GraphQLModelsRelationsEnums`, `TextStringNoLinebreakContext`, `SubscriptionEmit`, `ParserServices`, `RunOptions`, `PO`, `ExecuteTransactionCommandInput`, `DataPoint`, `StreamActivityModel`, `DaffAuthorizeNetReducerState`, `NavigationPluginStartDependencies`, `DefaultConfig`, `OracleConfig`, `UpdateUserAvatarService`, `builder.IDialogResult`, `AlertTitleProps`, `GraphVertex`, `F`, `TextureFormat`, `WorkflowExecuteMode`, `IXingInfoTag`, `ProjectLock`, `GleeConnection`, `IControllerConfig`, `IBucketDateHistogramAggConfig`, `requests.ListUpcomingScheduledJobsRequest`, `CreateChannelCommandInput`, `BlockDocument`, `DeleteProjectCommandInput`, `AbstractShaderNode`, `CkbMint`, `ListSafetyRulesCommandInput`, `ClientQuery`, `SidebarItem`, `StateNodeConfig`, `PatternMatchKind`, `TreeMeta`, `ConstantSourceNode`, `InMsg`, `ReadableQuery`, `SearchResultComponent`, `CategoricalChartProps`, `CoerceResult`, `RefactoringWithActionProvider`, `MDCDialogPresentationControllerDelegateImpl`, `IStartupInfo`, `IQuaternion`, `BaseRowDef`, `CaseReducer`, `Test`, `PreparedData`, `IAnyVisualization`, `IReminder`, `IosBuildName`, `TransactionAction`, `ITaskDataConnections`, `KeyRange`, `MemBuffer`, `OrderedComparator`, `ThreeEvent`, `DevicePixelRatioMonitor`, `ThyGuiderStep`, `ViewSlot`, `UploaderInputs`, `InputFieldsComponentsDef`, `ParticipantPermission`, `BitstreamFormatRegistryState`, `CstmHasuraCrudPluginConfig`, `Koa.Next`, `CodePrinter`, `ReferenceResolverState`, `UserQueryTask`, `FilterState`, `PanelData`, `WebGLSync`, `ReporterFactory`, `CanvasPath`, `ApiPackage`, `ISuggestionsCollector`, `BaseTransaction`, `ThrowIterable`, `MockDrake`, `MarkerInstanceType`, `LensState`, `VertexElement`, `ModelList`, `DAL.KEY_ESC`, `HsSidebarService`, `Ti18nDocument`, `BatchExecuteStatementCommandInput`, `CacheContent`, `RuleWithId`, `GfxBuffer`, `DiffOptionsNormalized`, `OsqueryAppContext`, `SelectChangeEvent`, `MarkerSet`, `MenuEvent`, `N3`, `TProtocol`, `EmailValidatorAdapter`, `PagerBase`, `IStyleObj`, `RouteData`, `SubstanceEnv`, `CreateProjectDto`, `DbUser`, `MenuStackItem`, `JSXExpressionContainer`, `IndexedCallback`, `OpGraphPath`, `Vec3Sym`, `DeepImmutable`, `ContinueStatement`, `IPointAttribute`, `SimpleSelector`, `UIRouter`, `ClassicComponentClass`, `Dual`, `AppServicePlan`, `ObjectView`, `DeleteValue`, `TreeNodeHTMLElement`, `IconComponentProps`, `UserProfileService`, `ImportFromAsNode`, `UpdateDocumentCommandInput`, `IterableIterator`, `WidgetView`, `FormErrorMessageType`, `requests.ListDatabaseUpgradeHistoryEntriesRequest`, `TestObservableLike`, `PopupStackItem`, `DocumentDeltaAtomicUpdate`, `DeleteDashboardCommandInput`, `DescriptorProto`, `DirectionLight`, `RequestCompleteEvent`, `EntityActionDataServiceError`, `CodegenDesignLanguage`, `Aggregator`, `DeprecationsClient`, `TimestampTrigger`, `SavedObjectsCreateOptions`, `HexcolorInfo`, `NotificationCreateProps`, `PDFForm`, `ClientOrderGoodsInfo`, `ValidationMetadata`, `TopLevel`, `AtomGridmaskImageElement`, `MemberLikeExpression`, `IntFormat`, `DirectoryObjectListResult`, `HTMLDice`, `OrgPass`, `SheetChild`, `BookService`, `IConversionValidation`, 
`SavedObjectSanitizedDoc`, `TLPointerEventHandler`, `FormStore`, `NextFnType`, `IWorkerArgs`, `XSort`, `TFnWatcher`, `Plugin.SNSAdaptor.Definition`, `EvAgentSession`, `NonEmptyArray`, `SQLVariables`, `ConfiguredPluginResource`, `EditableDocumentData`, `EntityItem`, `ResultType`, `QueryMany`, `IAmazonServerGroupDetailsSectionProps`, `RemovableAnalyserNode`, `EventExclusionPlugin`, `FluentBundle`, `RequestSpan`, `ExecutorState`, `StudioComponentInitializationScript`, `cBgS_GndChk`, `MappedSingleSourceQueryOperation`, `ListKeyManagerModifierKey`, `TemplatesManager`, `GraphQLList`, `Wrapper`, `QueryFilter`, `TagLimitExceededException`, `ClusterNode`, `RadioButtonProps`, `Operand`, `SpatialOctreeNode`, `StorageArea`, `PDFAcroComboBox`, `EntityChangeEvent`, `UberChart`, `SearchOpts`, `BubbleSeriesStyle`, `Operator.fλ`, `DescribeTasksCommandInput`, `Gzip`, `LegendType`, `WebGL1DisjointQueryTimerExtension`, `DialogPosition`, `RpcMessageSubject`, `t.Identifier`, `Ternary`, `StackProps`, `CoreHelpers`, `fhir.Task`, `unist.Node`, `CGSize`, `OfficeApp`, `DatePickerProps`, `BigintIsh`, `EdiElement`, `StepComponent`, `AccountLeague`, `RollupWatcher`, `PBRCustomMaterial`, `jdspec.PacketInfo`, `IMediaQueryCondition`, `ISPField`, `ServiceMetadata`, `BEMHelper`, `DescribeConnectorsCommandInput`, `ColumnPoint`, `AbstractRunner`, `CallHierarchyDataItem`, `IFabricWalletGenerator`, `ast.LetNode`, `SanityClient`, `ReorderAggs`, `RenderData`, `InterviewQuestionSortMap`, `Cubelet`, `TransportRequestOptions`, `SMTConstructorGenCode`, `Kubectl`, `IGetActivitiesStatistics`, `EditPropertyConfig`, `EmbedType`, `IRestApiContext`, `ObserverCallback`, `PerfToolsMutation`, `IModelConfiguration`, `TransferBrowserFftSpeechCommandRecognizer`, `IFocusedCellCoordinates`, `StyledProperties`, `TransformerOptions`, `ItemMap`, `KeyLabel`, `QuickPickItem`, `ColorConfig`, `CronosClient`, `GroupedOrderPremiumRow`, `ISparqlBinding`, `PullRequestViewModel`, `ImmutableCell`, `RoosterCommandBarProps`, `AddTagsOutput`, `Widget`, `CommandModule`, `StatefulCallClient`, `ListAnalyzedResourcesCommandInput`, `BarFile`, `estree.Program`, `TProps`, `UsersResponse`, `QueryOptionNames`, `IExportProvider`, `PointContainer`, `Int32Array`, `ViewRanges`, `ArrayBindingOrAssignmentPattern`, `AnimatedNode`, `AndDeciderInput`, `ColumnsSchema`, `LinkedListChild`, `BlockStackService`, `ConnectionGraphicsItem`, `ScrollSpyService`, `IReactExtension`, `BSplineSurface3d`, `UserGroup`, `SessionSourceControl`, `INestApplication`, `IEntityState`, `ComponentLayoutStyleEnum`, `LayerGroup`, `CompositeFilterDescriptorCollection`, `IExecutionContextContainer`, `SelectSeriesInfo`, `Safe`, `AutoTuneMaintenanceSchedule`, `ODataStructuredTypeParser`, `CancellationReceiverStrategy`, `SSBSource`, `AABB`, `GenericMetricsChart`, `NSMutableDictionary`, `SavedObjectsStartDeps`, `DeployedReplicaCollection`, `ModuleListener`, `PlaneTransformation`, `HRTime`, `ProjectItem`, `TreeBacked`, `BCSV.Bcsv`, `vscode.FileStat`, `HtmlRendererOptions`, `CoverConfiguration`, `next.Group`, `TRecord`, `NodeTypes`, `React.RefCallback`, `GetJobRequest`, `SliderHandle`, `tStringDecimalUnits`, `DeploymentImpl`, `RootVertex`, `ObservableChainQuery`, `Edition`, `ThemeColorDefinition`, `CallFrame`, `UniformPub`, `ContentFilter`, `AnimationAction`, `CacheUpdateEvent`, `DeSerializersT`, `near.NearSwapTransaction`, `CameraMatrix`, `SlashCommandConfig`, `KeyToIndexMap`, `TestEthersProvider`, `VpcConfig`, `IListSelectionConfig`, `GoogleStrategy.Profile`, `jest.SpyInstance`, `NetworkgraphSeries`, 
`UserFromToken`, `AlertDetails`, `ListDomain`, `Pickle`, `ExtendedCodeAction`, `DebugProtocol.StackTraceArguments`, `CollapsedTransform`, `InitialProperties`, `BodyPartConstant`, `CustomSeriesRenderItemParams`, `NineZoneManager`, `UniswapV2Pair`, `InvalidTagException`, `ts.ObjectType`, `DynamicsContext`, `core.ETHSignTx`, `TransactionApplyContext`, `WithLiteralTypescriptType`, `UserStoreReference`, `ClassVisitor`, `WebSocketTransport`, `IAst`, `ComponentGeneratorOptions`, `SavedObjectFinderUiProps`, `SalesforceFormValues`, `FolderResponse`, `LocaleService`, `CalendarConstants`, `RuleValidator`, `GX.SpotFunction`, `OperationData`, `DebugProtocol.InitializeRequestArguments`, `TileView`, `PredictionContextCache`, `GraphQLEnumType`, `IThrottlingMetrics`, `Technique`, `VirtualNetworkTap`, `DropdownButtonProps`, `MulticallResponse`, `DataSourceSnapshot`, `IQueryListProps`, `KeyUsage`, `SessionStateControllerState`, `RefreshTokenRepository`, `IColumnWrapper`, `AB`, `Graphin`, `TransactionSigner`, `SShapeElement`, `ParameterList`, `CreateConnectionCommand`, `ProjectFile`, `RpcContext`, `AESCipher`, `HumidityControlSetpointCCReport`, `TypeEvaluator`, `ProxyHandler`, `ObjectOptions`, `DescribeUserProfileCommandInput`, `CreditCardView`, `StructureTerminal`, `TypeTemplate`, `EightBittr`, `LegendPositionConfig`, `MapConfig`, `IAppSetting`, `SelectItemDirective`, `ActionFunction`, `Light_t`, `ModalService`, `MotionComponent`, `GradientDataNumber`, `UserPosition`, `FormFields`, `SubSequence`, `DataSourceConfiguration`, `Dockerode.Container`, `InformationPartitionElementProps`, `InterleavedBuffer`, `Responder`, `EmployeeStore`, `Effect`, `EntryControlCCNotification`, `ItemView`, `HookDecorator`, `SubConfig`, `CategorizedPropertyMemberDoc`, `IEntityType`, `TransactionId`, `WebWorker`, `requests.ListVmClusterPatchHistoryEntriesRequest`, `CollectionResult`, `CF.Subscribe`, `ColumnMetaData`, `AnnotationCollection`, `HintContext`, `Metas`, `UpdateStateValueFunc`, `Corner`, `GoldTokenWrapper`, `object`, `ICanvas`, `QConn`, `RegisterOutput`, `VerificationCode`, `CodeFlowReferenceExpressionNode`, `NextStep`, `BackgroundProcessState`, `NumBopType`, `FbBuilderFieldPlugin`, `Events.prekill`, `DestinationsByType`, `ActionsSubject`, `FieldProps`, `Diagnostic`, `ProcessRequestResult`, `TestFunctionImportEdmReturnTypeParameters`, `Patient`, `ICellRendererParams`, `Reflecting`, `DeleteDomainRequest`, `Champions`, `SyntheticErrorLabel`, `ZoneFileObject`, `d.OutputTargetDocsVscode`, `ListOdaInstancesRequest`, `UpdateEntryType`, `ListLeaderboardRecordsRequest`, `StoredFile`, `iDataTypes`, `INixieControlPanel`, `SVGTransform`, `RequestInit`, `CollectionTemplateable`, `Array3`, `SelectionSetNode`, `ExtConfig`, `TButtons`, `InfoPlist`, `PersistentVolumeClaim`, `Uuid`, `BubbleLegendItem.RangesOptions`, `ListNodesCommandInput`, `VisualizeEmbeddableContract`, `C51BaseCompileData`, `SourceMapSpan`, `GenericBinaryHeapPriorityQueue`, `UtilityInfo`, `XRFrameOfReference`, `EventSubscriptionsMessage`, `GasOption`, `GPUTextureFormat`, `InsertResult`, `FigurePart`, `SubsetStory`, `TransferTransaction`, `FileLoader`, `AWSSNSEvent`, `ModalInstance`, `SbbNotificationToastConfig`, `MaxPooling3D`, `ObservedNode`, `SocketAwareEvent`, `InjectorIndexes`, `DeleteDeviceCommandInput`, `CategoryResults`, `Portfolio`, `OpRecInterface`, `TransactionOutput`, `QueueOptions`, `RenderOptionFunction`, `ProcessingJobsMap`, `RouteMatch`, `K1`, `StorageValue`, `ListAttachmentsCommandInput`, `PaymentMethod`, `EmojiData`, `SubstituteOf`, `Indexes`, 
`PluginDeployerEntry`, `ts.Visitor`, `ProxyType`, `Keyframes`, `PackageContribution`, `ObservableMedia`, `NamedAttrMap`, `AngularFireDatabase`, `ShoppingCartState`, `ListDashboardsCommandInput`, `VaultItemID`, `IMinemeldCandidateConfigNode`, `EnvironmentProps`, `DisplayStyleProps`, `JGOFIntersection`, `DocUrl`, `TagValueType`, `NodeIdLike`, `AnimationEntry`, `BooleanLiteralExpr`, `SudokuBoard`, `FormFieldsProps`, `DecoratorType`, `Nodes`, `ts.MethodSignature`, `SPClientTemplates.RenderContext_Form`, `PUPPET.payloads.Room`, `DidDocument`, `StackEvent`, `Type_AnyPointer`, `RouterOptions`, `SubtitlesTrackId`, `LTypeResolver`, `AthenaClient`, `ParsedSite`, `NodeInstructure`, `NativeCallSyntax`, `CognitoUser`, `SchemaValidationContext`, `SsgRoute`, `FormApi`, `HintsConfigObject`, `angular.IIntervalService`, `SourceBuffer`, `RequestSet`, `R3`, `ShaderSlot`, `x.ec2.Vpc`, `RulesMap`, `StreamType`, `d.PrerenderUrlRequest`, `ICreateCommitParams`, `requests.ListExportSetsRequest`, `HttpResponseEncoding`, `languages.Language`, `Plural`, `Tense`, `BlockDefinitionCompiler`, `SimpleList`, `MockActivatedRoute`, `TimelineOptions`, `ConnectedSpace`, `Iam`, `EmbeddableInput`, `SeriesZonesOptions`, `DistanceExpression`, `ECB`, `vscode.Command`, `V1CommandInputParameterModel`, `ListSwipeAction`, `IContainerRuntimeOptions`, `PrebootDeps`, `IFilterOptions`, `XMLSerializer`, `TransactionGenerationAttempt`, `DeserializeWire`, `LicensingPluginSetup`, `LoadingProps`, `ControllerClient`, `QuickInputStep`, `RequestController`, `UpdatePublicData`, `Chord`, `SavedObjectMigrationContext`, `IGLTFRuntime`, `EngineArgs.CreateMigrationInput`, `BluetoothDevice`, `DeleteBotAliasCommandInput`, `CommandLineArguments`, `TextCanvasLayer`, `SummaryItem`, `ProgressBarData`, `ListTagsForResourceRequest`, `GetPolicyResponse`, `DefinitelyTypedTypeRun`, `GroupedFunnel`, `HistoryState`, `PaginationComponent`, `TextureUsage`, `models.NetFramework`, `DefaultEditorDataTabProps`, `VIS0`, `Freeze`, `ToastsApi`, `GlobalPositionStrategy`, `ContextMember`, `IPatch`, `ActionKey`, `OrderPremiumRow`, `RuntimeOptions`, `Torrent`, `PlaylistTrack`, `SObjectTransformer`, `BlogTag`, `AnyImportSyntax`, `StateService`, `SF`, `LinkedDashboardProps`, `KeyPair`, `NetworkManager`, `CustomPropertyHandler`, `$E.IBaseEdge`, `TydomController`, `ISyncedState`, `minimist.ParsedArgs`, `ListUsersCommandInput`, `KeyframeTrackType`, `bitcoinish.BitcoinishPaymentTx`, `AutoCompleteLabel`, `DashEncryption`, `Int32List`, `QueryDeploymentResponse`, `ListRepositoriesReadModel`, `Viewer`, `BluetoothRemoteGATTCharacteristic`, `UpdateOptions`, `KeySchema`, `IMappingsState`, `ControlElement`, `PackedBubbleLayout`, `AWSSNSRecordItem`, `IStats`, `ProjectMap`, `ClipboardEvent`, `ProfilerFrame`, `Yarguments`, `IntrospectionResult`, `FileId`, `DeleteWebACLCommandInput`, `SortClause`, `SubdomainAndZoneId`, `HttpParams`, `MetamaskState`, `MultigraphRequestOptions`, `RTCIceCandidateJSON`, `BoxPlotData`, `ExpressionTypeDefinition`, `IErrorCallback`, `ASNDBS`, `PredictableHook`, `anchor.BN`, `BubbleLegendItem.Options`, `PubEntry`, `imperative.IProfileLoaded`, `FilterOptionOption`, `MatSortable`, `V1Prometheus`, `LocalParticipant`, `ProcessedPackageConfig`, `p5exClass`, `GraphObject`, `IntraDayDataSourceType`, `d.EventSpy`, `ColorSpace`, `CallHierarchyOutgoingCallsParams`, `TestScheduler`, `MockStateContext`, `AnyConfigurationModel`, `CanvasEditorRenderer`, `SourceLocation`, `ArmParameters`, `IndexTree`, `IntlMessages`, `Responses.IListContentItemsResponse`, `GeoPointLike`, 
`BuiltinFrameworkMetadata`, `TestGraphic`, `Apply1`, `QExtension`, `ShellResult`, `OnReferenceInvalidatedEvent`, `ResourceRecord`, `PromiseResolver`, `Outlet`, `CreateStageCommandInput`, `HomebridgeConfig`, `AnimatableColor`, `SmsHandler`, `RespostaModel`, `PostType`, `IMergeNode`, `TodoType`, `RepositionScrollStrategyConfig`, `ArrayBinding`, `React.MutableRefObject`, `SyncEvent`, `IMYukkuriVoice`, `F.Function`, `TiledSquareObject`, `BaseUIManager`, `FilterGroupKey`, `ServerInfo`, `AnySchemeForm`, `SpacesClientService`, `MockedDataStore`, `IModelHubClientError`, `QueriesStore`, `StartStopContinue`, `ProfileRecord`, `u`, `TokenClaims`, `ExternalProject`, `RectResponderModel`, `IWholeSummaryPayload`, `MongoConnection`, `TemplateInput`, `WriteTournamentRecordRequest`, `SignatureHelp`, `IdentifierInfo`, `TopicId`, `RPC.IWatchResponse`, `KubernetesService`, `GlyphVertices`, `BundleItem`, `SuiteInfo`, `ContentCache`, `Encoder`, `AnimGroupData_Shape`, `AxisTitleOptions`, `FormFieldMetadataValueObject`, `LogsData`, `Employee`, `DataGatewayService`, `$NextFunctionVer`, `TimelineViewWrapper`, `ISO`, `ModuleCode`, `ArgPathOrRolesOrOpt`, `ConfigProviderProps`, `JRPCEngineNextCallback`, `cookie.CookieSerializeOptions`, `Vote`, `ClipRenderContext`, `TRef`, `Json.ObjectValue`, `IApiKubernetesResource`, `DatabaseService`, `ContrastColors`, `IonicModalController`, `EffectCallback`, `WebGLQuery`, `ObservationService`, `IMatrixEventProcessorResult`, `IThyDropContainerDirective`, `CreateProps`, `AWSError`, `BufferStream`, `SqipImageMetadata`, `InvalidInput`, `AxisGeometry`, `EditorChangeEvent`, `UserMessage`, `ICXListHTLCOptions`, `Show`, `AnchorPosition`, `ShippingMethod`, `ButtonComponent`, `IAmazonServerGroup`, `ContainerConfig`, `RigidBodyComponent`, `DisplayNameChangedListener`, `ElectronCertificate`, `DoOnStreamFns`, `StorageQuotaExceededFault`, `TypographyProps`, `GetMessageKeys`, `DDL2.IField`, `ts.ConstructorDeclaration`, `Package.ResolvedPackage`, `GetConfigOptions`, `ColumnAnimation`, `ArenaNodeText`, `Bezier`, `MsgCloseGroup`, `KameletModel`, `ToneEvent`, `GraphQLDirective`, `AzExtTreeDataProvider`, `ElementSourceAnalysis`, `Algorithm`, `AaiMessageTraitDefinition`, `NotificationDataFilled`, `Plane3dByOriginAndVectors`, `AliasOptions`, `FormInterface`, `CommentType`, `VaultID`, `coreClient.OperationArguments`, `IEditorProps`, `GetReadinessCheckResourceStatusCommandInput`, `InruptError`, `Src`, `VisualizeEditorCommonProps`, `ViewModel_`, `requests.ListBdsInstancesRequest`, `DocumentValidationsResult`, `Load`, `NullAndEmptyHeadersServerCommandInput`, `LivelinessMode`, `LightType`, `ExternalDMMF.Mappings`, `GoalItemViewModel`, `DebugConsole`, `IAdjacencyBonus`, `PoiGeometry`, `CssBlockAst`, `AnyValue`, `PlaylistModel`, `PureComputed`, `TSTypeParameterInstantiation`, `DeleteDeploymentCommandInput`, `IVectorStyle`, `TeamMember`, `BaseMessage`, `Adb`, `CountParams`, `IGlobalManager`, `NgWidget`, `ListPoliciesCommandInput`, `Roadview`, `IDBIndex`, `ODataQueryArgumentsOptions`, `Tween`, `SetSelectionMenuDelegate`, `IDBValidKey`, `TextMarker`, `DocumentRangeFormattingParams`, `HeaderStreamManipulator`, `TGroupHandle`, `QueryHook`, `WithoutSheetInstance`, `TypeReference`, `GfxInputLayoutDescriptor`, `NormalizedModule`, `CreateRequest`, `IAppContainer`, `HDWallet`, `ToggleGroupProps`, `Uint256`, `PrerenderContext`, `RootBank`, `ReducersMapObject`, `Visit`, `BasicReflectionEvent`, `TimeFormatter`, `QueryJoin`, `Runtime.Port`, `SilxStyle`, `MultiChannelAssociationCCReport`, `SqrlInstance`, `IContainerNode`, 
`IEmbedConfigurationBase`, `XPCOM.nsIFile`, `TLSSocket`, `ChangeFn`, `Knowledge`, `UserType`, `Reservation`, `IFavoriteColors`, `HTMLScLegendRowElement`, `IComputedFieldOwner`, `PinejsClient`, `CompilerCtx`, `OpenApiRequestBuilder`, `SourceMapGenerator`, `AuthContextType`, `DocumentStore`, `NavigationDescriptor`, `ValidatedPassword`, `Parjser`, `NodeSSH`, `AlterTableAddColumnBuilder`, `RoomModel`, `AutocompleteItem`, `AccessorConfig`, `CachePage`, `TFunction`, `SerialOptions`, `DecorationOptions`, `Session`, `RepositoryStatistics`, `ReadonlyNFA`, `Let`, `ParsedResults`, `common.WaiterConfiguration`, `SingleSpaAngularOptions`, `ResolveModuleIdOptions`, `GoogleProvider`, `vd.VNode`, `QueryKey`, `ITypeUnion`, `DescribeDashboardCommandInput`, `AuthedRequest`, `Advice`, `AnyRawModel`, `ThrottleSettings`, `MIRTypeOption`, `TypeMap`, `EngineResults.EvaluateDataLossOutput`, `AccountingService`, `AthenaExecutionResult`, `StepInfo`, `DeploymentResult`, `PatternLike`, `d.OutputTargetDocsCustom`, `inferHandlerInput`, `DeleteLifecyclePolicyCommandInput`, `OrderedRotationAngles`, `IToolbarAction`, `MetadataProvider`, `Slab`, `PymStub`, `requests.ListBackupsRequest`, `ITextDiffData`, `WholeHalfUnison`, `StringTypes`, `CommandsSet`, `ParserProduction`, `TypedEvent`, `ArrayCollection`, `Collectible`, `DaffNewsletterSubmission`, `IApprovalPolicy`, `_GlobalJSONStorage`, `UINavigationBar`, `BatchCreateAttendeeCommandInput`, `SVErrorLevel`, `CSSObject`, `IPuppet`, `DirFileNameSelection`, `RequestMock`, `ContentModel`, `DataViewValueColumn`, `UseRefetchOptions`, `PhaseModel`, `DtlsContext`, `IsTenantAvailableInput`, `DataFetcherOptions`, `TEventHandler`, `NotificationID`, `ReplExpect.AppAndCount`, `LazyService`, `PacketInfo`, `IFilePropertiesObject`, `TypeMetadata`, `GitBlameCommit`, `CoreFeature`, `SerializedData`, `CustomType`, `JQueryMouseEventObject`, `GenerateMappingData`, `QRPolynomial`, `stream.Readable`, `Slot`, `LiveExample`, `Define`, `AttributeKeyAndValue`, `ContractInterface`, `TestSuiteInfo`, `d3Geo.GeoRawProjection`, `CharGroup`, `AppUserCard`, `CopyResponse`, `CredentialsOverwritesClass`, `GeneratorFile`, `IndexerManagementResolverContext`, `IErrorPayload`, `RuleEngine`, `FileMode`, `messages.Background`, `PartnerActions`, `UserFacade`, `DocumentLinkShareState`, `ParameterConfig`, `NodeChanges`, `BenzeneGraphQLArgs`, `RelayServiceConnectionEntity`, `TestPlayer`, `AnalyzeCommentResponse`, `MessageAttachment`, `Permissions`, `IncludeMap`, `InputMode`, `ModuleBlock`, `point`, `TPropertyTypeNames`, `RosApiCommands`, `Fiber`, `ContinuationData`, `HashMapIteratorLocationTracker`, `DialogItem`, `ExpoAppManifest`, `TraitNode`, `NormalCollection`, `GenericRequestHandlerChain`, `VerifyEmailAccountsValidationResult`, `MappedTypeNode`, `ScopedCookieSessionStorage`, `TValue`, `RedBlackNode`, `MountedHttpHandler`, `SourceFileLike`, `EnhancedGitHubNotification`, `Apollo.SubscriptionHookOptions`, `ActivitySettings`, `FlowView`, `PickResult`, `CesiumLayer`, `WebCryptoFunctionService`, `PDFAcroField`, `NavigationAction`, `AfterGroupCallback`, `IHTMLCollection`, `InternalTakomoProjectConfig`, `MIRInvokeDecl`, `requests.ListPdbConversionHistoryEntriesRequest`, `DescribeCertificateAuthorityAuditReportCommandInput`, `express.Router`, `GetSharedData`, `LogFileParsingState`, `ActivityHeight`, `PackTypeDefinition`, `TagResourceRequest`, `TypeGenerics`, `WalletInitializationBuilder`, `IContentSearchRequest`, `CodegenContext`, `BabelConfigOptions`, `DataInterface`, `ObservableValue`, `Trees`, `XMessageBoxAction`, 
`DependencyStatus`, `UsersOrganizationsService`, `DependencyInfo`, `React.FormEvent`, `GenericOperation`, `AppApp`, `ListQueuesCommandInput`, `LogFormula`, `Appservice`, `UInt8`, `JQuery.TriggeredEvent`, `IdentifierType`, `CreateResponse`, `Facets`, `ViewerNetworkEventStarted`, `FormEventDetail`, `PDFCatalog`, `ItemElement`, `Blueprint`, `requests.ListKeyVersionsRequest`, `TEDirective`, `RouteProps`, `GeometryType`, `GlobStats`, `QueryLeasesRequest`, `HandlerParamMetadata`, `InputEventMouseButton`, `ICommandManager`, `Fn3`, `Line`, `GetByIdAccountsValidationResult`, `MarkerBase`, `Postable`, `UpdateTableCommandInput`, `InsertChange`, `BackendWasm`, `TestSchemaProcessor`, `PiConceptProperty`, `PeriodData`, `Packet`, `CSSParsedDeclaration`, `INamedObjectSchemaProperty`, `AutowiredOptions`, `SerializedNode`, `GPGPUBinary`, `TypedDocumentNode`, `GraphQLQueryGenerator`, `ObjectValueNode`, `IAuthConfig`, `OfficialAccount`, `Segno`, `Embed`, `AddressBookEntry`, `PipetteNameSpecs`, `StructureMap`, `RedspotArguments`, `PopulatedContent`, `CrochetValue`, `AaiDocument`, `DatabaseInstanceHomeMetricsDefinition`, `InjectorClient`, `NgextConfigResolved`, `NavigationButton`, `ListRecommendationFeedbackCommandInput`, `OciError`, `DataGraph`, `GlobalChartState`, `SubsetCheckResult`, `ClusterClient`, `Radians`, `WebviewTag`, `SendEventCommandInput`, `Prize`, `SimpleInputParamsCommandInput`, `WU`, `QuickInfo`, `Geoposition`, `By2`, `ColumnHeaderOptions`, `ProgressModel`, `CaretPosition`, `QueryImpl`, `JsonEnumsCommandInput`, `fakeDevice.Device`, `FoundationType`, `TransactionMeta`, `UserTimeline`, `ServerTransferStateTranslateLoader`, `EmergencyCoordinates`, `GraphCalculator`, `ITreeDataProvider`, `SecurityService`, `Equaler`, `PieceModel`, `TopNavItem`, `IBackendRequestData`, `yubo.WaveOptions`, `IManualTimeInput`, `DecoratorData`, `UiThread`, `UploaderEnvs`, `BaseModule`, `ImmutablePerson`, `BasicAuthResult`, `td.AppLogger`, `ICoreService`, `PlatformRepositoryService`, `RelativeInjectorLocation`, `IStateProps`, `DeclarationBlockConfig`, `Draggable`, `SheetSpec`, `ProtoJson`, `Finality`, `Information`, `XCommentNode`, `requests.DeleteConnectionRequest`, `IFeatureOrganization`, `PropsWithAs`, `A.App`, `OutgoingHttpHeaders`, `WatchSource`, `IDBOpenDBRequest`, `IResizeState`, `RouteFilterRule`, `On`, `MiddlewareFunction`, `FileAsset`, `ServerArgs`, `TimePickerBase`, `AggConfigs`, `CipherCollectionsRequest`, `UserController`, `SocketIO.Socket`, `ModelSummary`, `ng.IAttributes`, `requests.ListBdsMetastoreConfigurationsRequest`, `ERROR_CODES`, `Fixer`, `ShaderAttributes`, `NextLink`, `AutorestExtensionHost`, `MiniSimulation3D`, `DefaultClause`, `RollupStateMachine`, `CISKubeBenchReport`, `MasterKeySecret`, `Pseudo`, `DateTimeData`, `BaseHandlerCommonOptions`, `AzureComponentConfig`, `IGameState`, `Cat`, `SceneGraphComponent`, `Launch`, `DataLabelOptions`, `Katana`, `ThemeObject`, `VariableParserAST`, `AttributionData`, `Pickability`, `webpack.compilation.Compilation`, `HumanAddr`, `OrganizationalUnitConfig`, `BarService`, `FastifyPluginCallback`, `FirebaseTools`, `ListRowProps`, `SignatureHelpParameter`, `Server`, `CreateDeviceDTO`, `GetCertificateCommandInput`, `DappKitRequestMeta`, `ListEventsRequest`, `RecordDef`, `TableInstance`, `DropTargetMonitor`, `DeleteResult`, `FabricWallet`, `Tick`, `Accelerometer`, `IVec3Term`, `IObservable`, `ActionCodeSettings`, `ElectionMetadata`, `MapMouseEvent`, `SearchRequest`, `LocaleRecord`, `MarkdownProps`, `PropTypeConfig`, `LinkingCrossPlatform`, `JPA.JPAResourceData`, 
`logging.Level`, `ActionParams`, `IExportedValue`, `ArticleModel`, `ResetDBClusterParameterGroupCommandInput`, `ast.EscapeNode`, `Beacon`, `IMapping`, `OperationDescriptor`, `WetMessage`, `Delete`, `ConfigStorage`, `ShapeOptions`, `ts.BooleanLiteral`, `AssetService`, `BleepsSettings`, `SliderComponent`, `FrameData`, `DomainName`, `EditionsList`, `SimplePubSub`, `ErrorContinuation`, `ImportResolver`, `UnicornInfo`, `AlertInstance`, `d.Workbox`, `AddGroupUsersRequest`, `EventRepository`, `NodeCallback`, `FloatingPanel`, `Reduction`, `ResponsiveQueryContextType`, `EdaColumn`, `ToolchainName`, `TSForIn`, `Editors`, `GradientAngularVelocity`, `SingleResponseModel`, `THREE.Texture`, `VideoCapture`, `InheritanceChain`, `All`, `AtomChevronElement`, `UpdateSubnetGroupCommandInput`, `EnumEntry`, `ScreenReaderSummaryStateProps`, `ParserError`, `OnChildElementIdArg`, `ModifyRelationOptions`, `RoutesMeta`, `PackageChangelog`, `OperationPath`, `Easing`, `IHydrateResult`, `ReaderMiddleware`, `FormValues`, `d.CollectionCompilerMeta`, `VpnClientIPsecParameters`, `TiledMapResource`, `ConstantNode`, `requests.ListNetworkSecurityGroupSecurityRulesRequest`, `GenericThemes`, `SearchStrategyProvider`, `XYAndZ`, `PanelPlugin`, `NodeJS.Timer`, `SflibInstrumentMeta`, `IModelDb`, `FeatureSet`, `ErrorRequestHandler`, `Prog`, `ERC721ContractDetailed`, `CategoricalChartState`, `ScrollBehavior`, `GraphQLResponse`, `IAddMemberContext`, `ClientHello`, `YesNoLimitedUnknown`, `CustomQuery`, `These`, `Top`, `ColumnMetricsObject`, `StorageType`, `IUserInfo`, `PreprocessCollector`, `BitcoinAPI`, `GPUBuffer`, `TagResourceInput`, `SymbolTable`, `CtrOr`, `FlexibleConnectedPositionStrategyOrigin`, `TParams`, `IEntityAction`, `UInt64Value`, `BranchNode`, `FilePreviewModel`, `RuleFn`, `Children`, `lf.query.Select`, `IWatchCallback`, `CombatantInfoEvent`, `InferableComponentEnhancer`, `DatabaseEventBus`, `BlobPart`, `ICreateTableOptions`, `ISearchResponse`, `LocalRepositoryService`, `AnyToVoidFnSignature`, `IModalListInDto`, `CollapsableSidebarContainerState`, `CurrencyFormat`, `CardHeaderProps`, `UrlTree`, `RenderBox`, `IMarker`, `Derivative`, `JGOFMove`, `http.IncomingHttpHeaders`, `MatchedMention`, `ReadableAtom`, `SinonStub`, `CoreAPI`, `TopicsService`, `ViewConverter`, `SearchBar`, `ShallowWrapper`, `ESTree.CallExpression`, `DayResults`, `InputModel`, `PageG2Type`, `IOrganizationRecurringExpenseFindInput`, `PivotAggsConfig`, `TreeBranch`, `StepExpr`, `FileSystemProviderWithFileReadWriteCapability`, `CommandLineStringListParameter`, `Core`, `ISagaModule`, `INavNodeFolderTransform`, `CmsEditorFieldTypePlugin`, `TRejector`, `PipelineDescriptor`, `LibSdbTypes.Contract`, `VertexEntry`, `SpatialAudioSeat`, `SqsMetricChange`, `StoryApi`, `ts.Node`, `IIMenuState`, `ScanGameFile`, `BMP24`, `MIPS.Register`, `IOpenSearchDashboardsMigrator`, `OperationsListOptionalParams`, `CdtFrontElement`, `CodeError`, `Await`, `DataKey`, `UberToggleState`, `CancelToken`, `StatsTree`, `Msg`, `CreateChannelResponse`, `DTMock`, `PhotoData.PhotoDataStructure`, `CSVDataImporter`, `FileWatcherEventType`, `http.ServerResponse`, `Linter.Config`, `EntryProps`, `IJob`, `BlockGroup`, `LoginInput`, `CfnIntegration`, `AsyncQueue`, `PlatformConfig`, `TicTacToeAppState`, `ResolvedPos`, `DeleteDBSubnetGroupCommandInput`, `CustomCallAst`, `ProxyNode`, `requests.ListExadataInfrastructuresRequest`, `AsyncQueryable`, `DescribeParametersCommandInput`, `ECA`, `RNN`, `PreCheckerClient`, `AspectRatioType`, `InputEventType`, `ExpressionAstExpression`, `PiClassifier`, 
`ReportingUser`, `GoogleMeetSegmentationOperationParams`, `IBlockHeader`, `FungiblePostCondition`, `TimelineElement`, `LocalDataProvider`, `BoundFrustum`, `TooltipInitialState`, `TreeEntry`, `OptionFC`, `Events.hidden`, `TexturePalette`, `PopupService`, `FileStorageOption`, `SentinelType`, `SpectatorFactory`, `ExpandResult`, `AnyConstructor`, `TileDoc`, `SinkBehavior`, `HttpClientRequestConfig`, `BookModel`, `CrawlerDomain`, `DeploymentParams`, `Subgraph`, `QueryDetails`, `requests.GetAllDrgAttachmentsRequest`, `GraphQLInputFieldConfigMap`, `PendingQueryItem`, `JConfiguration`, `DistinctValuesRpcRequestOptions`, `EstimateGasOptions`, `ChangeUserLanguageDto`, `ProcessMainAdapter`, `IUrl`, `PutAppInstanceRetentionSettingsCommandInput`, `UserIdDTO`, `puppeteer.ClickOptions`, `Pose2DMap`, `LazyBundleRuntimeData`, `DescribeFlowCommandInput`, `ConnectionInfo`, `Quota`, `IChangeInfoHash`, `IOptionsService`, `TwingNode`, `ExpNumUop`, `FillerHook`, `SimplifiedParameterType`, `mongoListener`, `OpenAPI.HttpOperation`, `TFieldName`, `DOMWidgetView`, `DynamoDBClient`, `EventHub`, `ListingType`, `RaycasterService`, `SSRContext`, `UpdateEntry`, `Importer`, `ForestNode`, `FormRowModel`, `ParticleEmitterWrapper`, `PanelState`, `PlatformInformation`, `LayerVariable`, `FileData`, `AreaService`, `UpdateThemeCommandInput`, `FileWatcherProvider`, `PvsContextDescriptor`, `VersionHistory`, `TelemetryContext`, `TableProps`, `CompoundStatementContext`, `Nonce`, `Fun`, `Markets`, `ComponentTypeTree`, `NonFungibleConditionCode`, `TagCreator`, `requests.ListFindingsRequest`, `IOutput`, `ToastProvider`, `WalletContextState`, `SectionConfig`, `IInteraction`, `AllQueryStringTypesCommandInput`, `TypedArrayConstructor`, `OrganizationModel`, `EntityActionPayload`, `IQueryBus`, `PageSort`, `ApplicationContract`, `CalendarView`, `NeuralNetwork`, `FeatureNode`, `GraphModel`, `ServerCancellationToken`, `ShaderMaterial`, `MainWindow`, `QueryEngineConfig`, `Relay`, `InstructionWithText`, `FormatMetadata`, `BuildifierConfiguration`, `GPUShaderModule`, `J3DModelInstance`, `IPlotState`, `PrivateKeyPEM`, `Keys`, `LoaderResource`, `TransitionSpiral3d`, `MatListOption`, `NotebookCell`, `IntrospectionEngine`, `SessionContext`, `coreRestPipeline.RequestBodyType`, `IVocabularyItem`, `NzSelectItemInterface`, `IRemovalInfo`, `BasePacket`, `ConnectedAccount`, `DescribeDBEngineVersionsCommandInput`, `StripePaymentListener`, `ExtendableBox`, `Exclude`, `JQuery.ClickEvent`, `BlockModel.Resolved`, `JsonSchemaRootAssertion`, `grpc.Request`, `AsyncArray`, `ConnectionSummary`, `CompiledPredicate`, `DateSpan`, `Interior`, `JRPCEngineReturnHandler`, `IsSelectableField`, `StoreTypes`, `TextLiteralContext`, `MetadataCacheResult`, `ExpressionsService`, `OpMapper`, `ConfigRepository`, `ResolverInput`, `XTermMessage`, `IHookStateInitAction`, `T.RenderFunction`, `ActivityRequestData`, `ResolverFactory`, `BezierPoints`, `PerformRenameArgs`, `DescribeClusterCommandInput`, `FieldValidator`, `TrialVisit`, `ITab`, `IVerificationGeneratorDependencies`, `IsDeletedEventPipe`, `AtlasResourceItem`, `MangoGroup`, `EosioContractRow`, `Verifiable`, `NowBuildError`, `FinderPattern`, `DecompositionResult`, `PerformReadArgs`, `RegisterInstanceCommandInput`, `ObjectTypeMetadata`, `AvailableFilter`, `RecursiveShapesCommandInput`, `TwingSource`, `DaffCartFacade`, `TootDetailState`, `IRadioGroupProps`, `Swarm`, `KBService`, `PositionComponent`, `JSONIngestionEvent`, `JsonFormsCore`, `GetPredicateParams`, `ValidateEvent`, `StorableUser`, `Sampler`, `TxCreate2Transfer`, 
`InputTokenObject`, `OvSettingsModel`, `GX.DistAttnFunction`, `BuildConfig`, `FileNode`, `KeywordErrorDefinition`, `WorkspaceEdit`, `InputOnChangeData`, `NativeScriptPager`, `DeviceDetectorService`, `CustomPropertyDecorator`, `HttpsProxyAgent`, `Arc`, `NodeProps`, `GridProps`, `Approval`, `SpawnOptionsWithoutStdio`, `HashCounter`, `RatioMetric`, `VerificationRule`, `TagEntity`, `NSString`, `UID`, `CompilerSystem`, `ARCommonNode`, `UnderlyingAsset`, `PartialMessageState`, `TheSagaStepfunctionSingleTableStack`, `ValidatorSpec`, `AuditLogger`, `FunctionEntity`, `BreadcrumbProps`, `moq.IMock`, `AppStatusStore`, `InvertedIndex`, `SpriteAssetPub`, `ByteSizeValue`, `BaseChartisan`, `Timesheet`, `PenroseState`, `UserConfiguredActionConnector`, `AuthService`, `ClientTipsData`, `GeoNode`, `NotificationType0`, `DayPickerProps`, `SignerWithAddress`, `D3_SELECTION`, `ClassSelector`, `ReplyMsgType`, `TensorBuffer`, `QueryCallbacksFor`, `AlertContentProps`, `TagResourceResult`, `AutoBounds`, `Conflict`, `TripleObject`, `IdOrSym`, `CLI_OPTS`, `TorrentState`, `WIPLWebpackTestCompiler`, `P2WPKHTransactionBuilder`, `PreviewState`, `ItemInfo`, `HostRule`, `EcsEvent`, `UpdateExpressionDefinitionChain`, `wjson.MetricWidgetAnnotationsJson`, `CopyAsOrgModeOptions`, `RegName`, `React.KeyboardEvent`, `VNodeThunk`, `MatchExpr`, `GenericCall`, `StatusIndicatorProps`, `TLayoutSize`, `GeoPosition`, `createStore.MockStore`, `ActionHandlerRegistry`, `ResolvedType`, `requests.ListTargetDatabasesRequest`, `Drop`, `TableItem`, `EncodedMessage`, `AmqpConnectionManager`, `connection`, `HubstaffService`, `CoreFeatures`, `IonicApp`, `GeneratorContext`, `d.CompilerFileWatcherCallback`, `ListOrganizationAdminAccountsCommandInput`, `OAuthToken`, `CreateGroupRequest`, `Func0`, `OneOrMany`, `P.Parser`, `RadixParticleGroup`, `ToggleComponent`, `Type`, `CredentialRepresentation`, `JSONSchemaAttributes`, `DiscoveryService`, `TargetLanguage`, `HeftConfiguration`, `ABLTempTable`, `BoundsOctreeNode`, `GF`, `CashScriptVisitor`, `DocumentationLink`, `CoverageFlatFragment`, `MeshBuffers`, `UploadTask`, `PlayState`, `SpaceUser`, `DataServiceConfig`, `Signale`, `GraphSignedTransferAppState`, `Types.Config`, `PiContainerDescriptor`, `Minimum`, `CanvasTheme`, `ShowProjectService`, `TopicStatus`, `InputFile`, `TrackedEither`, `ts.SwitchStatement`, `IDBFactory`, `ProgressCallback`, `Weave`, `RelationalOperatorConfig`, `HTTPError`, `IRoute.IParameter`, `Matrix3d`, `AbortedCallback`, `HsConfig`, `IFetchedData`, `BlockAttributes`, `Commitment`, `TlsConfig`, `SidenavMenu`, `AppOptions`, `PluginHost`, `JSONEditorSchema`, `CreateProjectCommandOutput`, `PartyMatchmakerRemove`, `LeaderboardRecord`, `BasePath`, `Flap`, `ModuleResolutionState`, `Graphics.BlendOperation`, `VirtualNode`, `TClass`, `AttributeValueChoiceOption`, `ExpShapeSlice`, `IndexOp`, `ListUserGroupsRequest`, `TimePointIndex`, `Classes`, `Y.Doc`, `Styles`, `AzureFileHandler`, `PluginInstance`, `MarketInfo`, `ClarityAbi`, `CheckboxState`, `TextNode`, `MockPointOptions`, `ChipColor`, `INetworkInfoFeatureDependency`, `WetAppBridge`, `IncomingForm`, `ODataQuery`, `IRuleSpecObj`, `ISkill`, `UnwrapRef`, `FileRecord`, `TemplateParam`, `DaffCategoryFilterEqualToggleRequest`, `fixtures.Fixtures`, `SfdxFalconErrorRenderOptions`, `L.Property`, `UseLiveRegionConfig`, `VariantCurveExtendParameter`, `AppGachaItem`, `InversionFix`, `ModelPlayer`, `IMapState`, `TextElementFilter`, `DaffCategoryFilterRequest`, `MetadataScanner`, `RegSuitCore`, `ast.Name`, `ScalarParamNameContext`, `EventHandler`, 
`StreamHead`, `IHistory`, `CubieCube`, `TypedArrays`, `ProblemDimension`, `ISettingRegistry`, `MicrosoftDevTestLabLabsResources`, `FormatFlags`, `zowe.IUploadOptions`, `WorkRequestSummary`, `BasicAcceptedElems`, `CreateDomainResponse`, `IWorkspace`, `TestElementProps`, `ParseParams`, `FibaroVenetianBlindCCGet`, `CaseUserActionsResponse`, `UnitOfMeasurement`, `IRichPropertyValue`, `PublicTransition`, `PluginPositionFn`, `ActionTreeItem`, `CustomerState`, `t.STStyle`, `StackedRNNCellsArgs`, `InputMessage`, `ParsedRepo`, `ListSecurityProfilesCommandInput`, `ServerRequest`, `ListEnvironmentsCommandOutput`, `NameObj`, `IconName`, `CoverageMap`, `SharedFunctionCollection`, `OptionDefinition`, `CpuRegister.Code`, `MessageHashService`, `Blockly.Workspace`, `InjectableMetadata`, `ResourceValue`, `Mesh2D`, `EuiBasicTableColumn`, `IToolbarDialogAddonHandler`, `DescribeResourceCommandInput`, `ThyAbstractOverlayPosition`, `PostprocessSetOptions`, `StackInspector`, `ModuleNameAndType`, `EllipsisNode`, `Protobuf.Type`, `EntityLoaderOptions`, `DocumentWatcher`, `BrowserState`, `float`, `Outputs`, `MapAnchor`, `DescribeAddressesCommandInput`, `LinesChangeEvent`, `Percentage`, `Pause`, `DocumentOnTypeFormattingParams`, `IDBPTransaction`, `InterfaceCombinator`, `SerializedFieldFormat`, `NodeCollection`, `DropdownListItem`, `ProblemLocation`, `LexerContext`, `BlockEntity`, `NotificationPayload`, `LiveAnnouncer`, `Calendar_Contracts.IEventCategory`, `HandlerArgs`, `RelativePattern`, `WriteStream`, `INotesGetByContactState`, `Done`, `CesiumService`, `RTCSessionDescription`, `MethodAbi`, `OptimizeJsResult`, `LocationHelper`, `CtrAnd`, `IQuizQuestion`, `SelectionModelConfig`, `PluginService`, `ISpacesClient`, `ChartComposition`, `QuerySolution`, `FormatterProps`, `RichRemoteProvider`, `CachedItem`, `ContentEditableEvent`, `HostSchema`, `UISchemaElement`, `ESLintNode`, `MDCRipple`, `IInput`, `GfxBindingsDescriptor`, `ExtendedPoint`, `FilterData`, `RpcMessagePort`, `SoloOptions`, `RnnStepFunction`, `DeployBundle`, `AnyBuildOrder`, `GlobalModelState`, `TComponentConfig`, `AggregateColumnModel`, `FormSchema`, `RawSavedDashboardPanel640To720`, `ListManagementAgentsRequest`, `Schemas`, `ParameterNode`, `RenderContainer`, `ISPListItems`, `IActivity`, `types.IDBRow`, `NightwatchBrowser`, `ConfigModule`, `ServiceHealthStatus`, `DataTransferItemList`, `Conv3D`, `InAppBrowserObject`, `IMenuItemInfo`, `CapabilitiesResolver`, `GroupEntity`, `ResolveContext`, `TapoDeviceKey`, `CommonTypes.NotificationTypes.LastKnown`, `ThyGuiderManager`, `RoomService`, `SetupApp`, `WishlistState`, `ExpString`, `ExportCollector`, `SystemHealth`, `BindingForm`, `Ng2SmartTableComponent`, `DictionaryNode`, `GroupItemDef`, `TimerService`, `CopyDBParameterGroupCommandInput`, `DataTable.Row`, `WithUserAuthOptions`, `PaymentService`, `RequestorHelper`, `DeleteSecurityConfigurationCommandInput`, `THREE.Quaternion`, `LocaleOptions`, `IEventSource`, `requests.ListDbSystemShapesRequest`, `ColumnsType`, `SupportedModels`, `GetDirsOrFilesOptions`, `DoomsdayDevice`, `CreateProjectResponse`, `HeaderRepository`, `ArrowCallableNode`, `Regex`, `CoverageRunner`, `DeleteDBParameterGroupCommandInput`, `L1Args`, `ConfigurableCreateInfo`, `RepoCommitPathRange`, `UpdateClusterCommandInput`, `INodePackageJson`, `ReviewItem`, `MongoPagination`, `IPatchRecorder`, `DryadPlayer`, `ProtractorExpectedConditions`, `CommentDto`, `MeetingEvent`, `GroupButton`, `TNodeType`, `ImageConfig`, `PropertyValueRendererContext`, `ScreenshotOptions`, `requests.ListMigrationsRequest`, 
`AnimationOptions`, `UserService`, `SimpleClass`, `AnimationMixer`, `LiveAtlasWorldDefinition`, `StableTokenInstance`, `BlueprintInfo`, `BitSet`, `DocumentContext`, `VectorStage`, `PartialStepState`, `NotificationCCAPI`, `position`, `JwtService`, `BitWriter`, `GeneratedIdentifierFlags`, `LayerWizard`, `HttpResult`, `xLuceneFieldType`, `S3PersistenceAdapter`, `SlugifyConfig`, `ScryptedNativeId`, `StixObject`, `Ending`, `FeatureDefinition`, `FakeConfiguration`, `SoftwareSourceId`, `ICompletionItem`, `FieldFilterRowData`, `MediaExtended`, `RouteState`, `IWatchExpressionFn`, `Core.Color`, `UnitProps`, `ElementSelector`, `web3ReactInterface`, `AsyncFunction`, `LimitLeafCounter`, `int`, `Vidi`, `CmbInstance`, `CPU6502`, `ScannedElementReference`, `GoToOperation`, `AccountsModel`, `protos.common.CollectionConfigPackage`, `AN`, `CollectorState`, `RouteRecordRaw`, `ValidationQueueItem`, `Cypress.Chainable`, `ExternalSource`, `CoinTransfer`, `Nodes.ASTNode`, `ListPermissionsCommandInput`, `IAttributeData`, `IConnectionParams`, `Prisma.SortOrder`, `VirtualNetworkWaiter`, `CatalogEntry`, `ArrayNode`, `HSD_JObjRoot_Instance`, `SpecificWindowEventListener`, `RepaymentRouterContract`, `cg.Key`, `SizeWithAspect`, `Branch`, `ConversionFunction`, `IDroppableItem`, `HJPlayerConfig`, `GLSL`, `BaseContractMethod`, `FilterExcludingWhere`, `IndicatorObject`, `AsyncResource`, `UiSettingsDefaultsClient`, `InternalCoreStart`, `LinkDownload`, `Adapters`, `ReadyValue`, `AuthStorage`, `WorkboxService`, `TestKernelBackend`, `io.WeightsManifestConfig`, `CmsGroupPlugin`, `NamedTensorsMap`, `AboutComponent`, `HSD_JObj_Instance`, `WorkbenchPageObject`, `Range2d`, `SimulatedPortfolio`, `CaseStyle`, `ArtifactVersion`, `ReplayTick`, `PaymentInformation`, `Lanes`, `ExtendedBlock`, `WildlingCard`, `RuntimePlugin`, `ServerClass`, `ElementRect`, `ConditionalDeviceConfig`, `IXPath`, `Noise`, `ComponentFramework.Dictionary`, `MetaTagModel`, `DenseLayerArgs`, `CdkOption`, `LayerName`, `LegendLocationSettingsProps`, `TemplateLiteralType`, `TimelineKeyframe`, `MathjaxAdaptor`, `PopupState`, `ValueIterator`, `IDecodePackage`, `SignaturePubkeyPair`, `LinkedNodeList`, `DocViewRenderProps`, `OrderedIterable`, `EdgeType`, `WidgetProps`, `ErrorCodeDefinition`, `IContainerProps`, `FetchVideosActions`, `ITaskFolder`, `Apdex`, `window.ShellWindow`, `THREE.Group`, `InPacketBase`, `FailureEventData`, `DealCriteria`, `QueryNameContext`, `QueryOne`, `TypeDescriptor`, `IZosmfTsoResponse`, `JasmineBeforeAfterFn`, `ICacheItem`, `TileTextElements`, `FlashcardFieldName`, `AccountGoogle`, `CustomArea`, `OperationOptions`, `BrowseResult`, `NotificationPressAction`, `TColumnRowPair`, `d.Config`, `Modifiable`, `GeneratedCodeInfo_Annotation`, `RenderArgsDeserialized`, `_Explainer`, `OrganizationPolicy`, `IngameGameState`, `SphereColliderShape`, `IChapter`, `TraceStep`, `DragItem`, `IFBXConnections`, `GitRevisionReference`, `GroupUserEditResponse`, `ConfigLoaderResult`, `FieldError`, `DefaultSDP`, `IVarAD`, `tinyapp.PageOptions`, `BrowserFftSpeechCommandRecognizer`, `TAuthUserInfo`, `CodeMirrorAdapter`, `IFruit`, `ContextEntry`, `ITreeItem`, `MerchantUserEntity`, `EventFragment`, `NodeTracerProvider`, `OpOrData`, `Fix`, `PaneInvalidation`, `SpaceBonus.STEEL`, `StickyDirection`, `ICXOffer`, `PercentLength`, `DescribeChannelModeratorCommandInput`, `CopyDBClusterParameterGroupCommandInput`, `ListTablesCommandInput`, `SteamDeviceReport`, `TransferOptions`, `FabricIdentity`, `TextAreaCommandOrchestrator`, `Integer64`, `SafeBlockService`, `Trigger`, 
`IViewportInfo`, `MailOptions`, `BiquadFilter`, `ControllerMeta`, `Instance`, `ProjectExport`, `_CollectorCallback2D`, `ReflectionCategory`, `ISiteScriptAction`, `ConditionExpressionDefinitionFunction`, `WorkingService`, `FlatConvectorModel`, `HomeService`, `UnitFormatOptions`, `MappingEvent`, `RouteObject`, `VectorOptions`, `SandboxType`, `VpnConnection`, `SignedStateWithHash`, `AsyncComponent`, `PageScrollInstance`, `TxBroadcastResult`, `HierarchicalFilter`, `MetricTypes`, `SignupDTO`, `ng.IAugmentedJQuery`, `ApiQueryOptions`, `tcp.ConnectionInfo`, `CurriedFunction2`, `NodeKind`, `EngineConfig`, `PreventAny`, `Condition`, `tf.TensorBuffer`, `SetupFn`, `OrganizationTeamsService`, `StructuredTypeSchema`, `FormattingContext`, `TInstruction`, `ArrayPaginationService`, `BindingDef`, `PsbtInputData`, `PopoverContextOptions`, `MDCTopAppBarBaseFoundation`, `PoolCache`, `MonzoPotResponse`, `HttpClient`, `ColumnsContextProps`, `DescribeReplicationConfigurationTemplatesCommandInput`, `OrganizationMembershipProps`, `ABLVariable`, `Granularity`, `TermsIndexPatternColumn`, `SessionStorageCookieOptions`, `Format`, `ContractVerificationInput`, `ScrollHooks`, `Database`, `ProductResult`, `Dayjs`, `NcPage`, `ArgumentCheck`, `theia.CancellationToken`, `Lines`, `PriorityListGroup`, `CloudDirectorConfig`, `dayjs.ConfigType`, `ScmRepository`, `ODataApi`, `Bias`, `IpcPacket`, `IExportData`, `HasId`, `flatbuffers.Builder`, `BungieGroupMember`, `ListTagsResponse`, `UpToDateStatus`, `CpuRegister`, `WriteStorageObjectsRequest`, `TwingEnvironment`, `RushCommandLineParser`, `A`, `StyleProps`, `SeedAndMnemonic`, `LabelValues`, `OnCancelFunc`, `ErrorHandlingResult`, `ECSComponentInterface`, `UpdateOpts`, `FindCharacterMotion`, `GeneratorError`, `ThyTransferItem`, `requests.ListDataSafePrivateEndpointsRequest`, `IMenuItemConfig`, `requests.ListCloudVmClustersRequest`, `Stroke`, `GameFeatureObject`, `GXMaterialHelperGfx`, `ReadModelStoreImpl`, `ManagedFocusTrap`, `BuildStyleUpdate`, `VirtualGroup`, `IAggFuncParams`, `CreateMockOptions`, `ChartHookReturnType`, `LockMode`, `FeedQueryVariables`, `TsickleIssue1009`, `IRequest.Options`, `ModelField`, `AggParamType`, `GetSpaceEnvironmentParams`, `MemberEntity`, `ActionConnector`, `UiStateStorage`, `ListField.Value`, `DQLSyntaxErrorData`, `SaladTheme`, `TokenService`, `MessageViewProps`, `PotentialApiResult`, `IQueueRow`, `providers.BaseProvider`, `TestInfo`, `CompilerJsDocTagInfo`, `ManifestEditor`, `App.services.IPrivateBrowsingService`, `KubernetesObject`, `IResourceEntity`, `CrochetTrait`, `ZipkinSpan`, `DeleteRuleGroupCommandInput`, `ErrorObservable`, `ContractName`, `TelemetryEvent`, `PointData`, `IOSProjectConfig`, `JourneyStage`, `ResponseDataAccessor`, `FunctionEnvelope`, `IEncoder`, `IPatchData`, `AudienceOverviewWidgetOptions`, `T.MachineEvent`, `NavigateToItem`, `SecurityTokenAdapter`, `Paged`, `MangoQuery`, `BuildDecoratorCommand`, `AthleteModel`, `JsonOutput`, `RequiredParams`, `MalFunc`, `EventList`, `NotebookCellData`, `ts.CompletionEntry`, `MethodWriter`, `OutModeDirection`, `PageDoc`, `FoldingContext`, `WifDecodeResult`, `EventFetcher`, `NetworkError`, `NativeTexture`, `HighPrecisionLineMaterial`, `ImageFormat`, `PddlConfiguration`, `RouteMap`, `Interception`, `LoginDto`, `ConnectorProps`, `IMenuContext`, `PostCombatGameState`, `WidgetsRegister`, `ChainState`, `IInviteAcceptInput`, `SimpleTypeFunctionParameter`, `IConfigData`, `GovernObservableGovernor`, `CornerMap`, `FoundOrNot`, `FramesType`, `IObserverLocator`, `DisabledTimeFn`, 
`ListOfferingsCommandInput`, `MenuState`, `MDCShapeCategory`, `KeyringTrace`, `UnionType`, `ListServicesCommandInput`, `PadId`, `Bignum`, `S3Object`, `SObjectDefinition`, `InterfaceNamespaceTest`, `NuxtApp`, `SchemaMetadata`, `FunnelCorrelation`, `CacheService`, `ItBlock`, `DataSourceTileList`, `CtrLte`, `Breadcrumbs`, `BifrostRemoteUser`, `PhysicsComponent`, `StackTrace`, `GeneratorConfig`, `Comments`, `Eof`, `EventReporter`, `ButtonBaseProps`, `EntitySelectors`, `PromoCarouselOptions`, `LocationDescriptorObject`, `DOMStringMap`, `OpenOrders`, `AppCurrency`, `FunctionCallNode`, `TableEvent`, `PlasmicComponentLoader`, `GetDeliverabilityDashboardOptionsCommandInput`, `three.Geometry`, `ASTPath`, `AnyImportOrRequireStatement`, `PlayerChoice`, `ConfigurationCCReport`, `FileStatusResult`, `InputChart`, `Username`, `XmlDocument`, `DatasourceConfig`, `LinkLabelsViewModelSpec`, `AuditoryDescription`, `DAL.DEVICE_ID_SYSTEM_LEVEL_DETECTOR`, `ColorPickerItem`, `LoginResponse`, `BlobsModel`, `Interface2`, `CounterService`, `LocationMarkModel`, `AWSContext`, `CreateCatDto`, `GraphcisElement`, `cc.AudioClip`, `NodejsFunction`, `LoggerSink`, `EmbeddedRegion`, `ProgressUpdate`, `EncryptedMessageWithNonce`, `StatePathsMap`, `DeploymentTemplateDoc`, `AssignmentExpressionNode`, `WaiterResult`, `IStaticMetadata`, `CacheMap`, `T12`, `turkInformation`, `SegmentBase`, `UnsubscribeCommandInput`, `ParseStream`, `SchemaUnions`, `ClozeRange`, `IStoredTransaction`, `CardModel`, `CopyLongTermRetentionBackupParameters`, `LogLevelType`, `Tspan`, `EventsService`, `ShurikenParticleSystem`, `WebviewPanel`, `LineUp`, `Electron.MessageBoxReturnValue`, `ESLPanel`, `HleFile`, `PuzzleGeometry`, `ISODate`, `PeriodicWave`, `StepGenerator`, `Lint.IOptions`, `requests.ListAvailablePackagesForManagedInstanceRequest`, `MessagesPageStateModel`, `Referenceables`, `FetchHandle`, `ColorDynamicStylePropertyDescriptor`, `UICollectionViewLayoutAttributes`, `PatternEnumProperty`, `EuiSwitchEvent`, `IScope`, `Hover`, `THREE.Material`, `IRadio`, `AnyResponse`, `RunGroupProgress`, `PathObject`, `RedirectPolicy`, `Platforms`, `QueryArgDefinition`, `CreateQueueCommandInput`, `PartyMatchmakerTicket`, `Frequency`, `FileType`, `DecodedOffset`, `view.View`, `OrdenationType`, `MaterialLayer`, `ScreenOptions`, `IRenderableColumn`, `ProjectQuery`, `puppeteer.ScreenshotOptions`, `Animated`, `IFunctionAppWizardContext`, `AbstractElement`, `SaveEntitiesCancel`, `RenderServiceMock`, `AssetDetails`, `ClientContext`, `StakingBuilder`, `GameMap`, `AcceptPaymentRequest`, `Schema`, `DictionaryExpandEntryNode`, `IOrg`, `TreeNodeInterface`, `BSTProcess`, `UnitTypes`, `GotResponse`, `RealFileSystem`, `IListenerDescription`, `CaptureOptions`, `ChromeBadge`, `LazyBundlesRuntimeData`, `HighlighterOptions`, `CompositionEvent`, `IBetaState`, `ChannelMetadataObject`, `TextureCubeMap`, `ts.TranspileOptions`, `Sku`, `TokenIndexedCoinTransferMap`, `IO`, `RecordSourceProxy`, `ListTagsForResourcesCommandInput`, `Sprite`, `EyeProps`, `RtmpOutput`, `WriteBuffer`, `HSD_TETev`, `RawValue`, `vscode.CancellationToken`, `AppUpdater`, `IGatewayMemberXmpp`, `TFLiteWebModelRunnerTensorInfo`, `MarkerData`, `ValueMap`, `IApiSourceResult`, `IRichTextObjectItem`, `RoomBridgeStoreEntry`, `PluralRules`, `FocusOptions`, `TaskRunnerCallback`, `AuthenticateGameCenterRequest`, `SVObject`, `BaseAdapter`, `UpdateAccountSettingsCommandInput`, `PutBucketLifecycleConfigurationCommandInput`, `RequestId`, `IWidget`, `IBrowser`, `VirtualKeyboard`, `TransformResult`, `ActionBinding`, `BaseLayer`, 
`MeetingAdapter`, `IJsonSchema`, `HammerInputExt`, `ExecutionPathProps`, `LinterConfig`, `CombatZerg`, `PromisifiedStorage`, `GetLaunchConfigurationCommandInput`, `TestScriptErrorMapper`, `PolyfaceData`, `StyleResources`, `IControllerAttributeProvider`, `ESLImage`, `Evaluate`, `RollupChunkResult`, `IStashEntry`, `FileSystemState`, `TopicsData`, `SdkError`, `AssignmentPattern`, `OrderWithContract`, `SqrlExecutionState`, `Game`, `google.maps.Map`, `MemberDescriptor`, `AnswerType`, `CanvasImageSource`, `PackageInstructionsBlock`, `ReduxStoreState`, `BrokerConfig`, `ConnectionConfig`, `IFindWhereQuery`, `AutorestNormalizedConfiguration`, `AccountManager`, `AssessmentData`, `CacheEntryListener`, `UIPreparationStorage`, `Case`, `ZRC2Token`, `Sector`, `ByteBuffer`, `IModelBaseHandler`, `TheWitnessGlobals`, `LayerArgs`, `SquireType`, `RequestState`, `UpdateWebACLCommandInput`, `TypedMessage`, `IntrospectionField`, `Http3PMeenanNode`, `NzThItemInterface`, `ActivityAudience`, `PreReqView`, `PageInstance`, `EnsuredMountedHTMLElement`, `AtomState`, `StravaActivityModel`, `IServiceParams`, `INameAtom`, `ExplorerView`, `SEdge`, `VSTS`, `ReconnectingWebSocket`, `TreeNodeComponent`, `android.view.View`, `SpeechSynthesisVoice`, `ClientEngineType`, `TargomoClient`, `ClassMethod`, `SessionPromise`, `StaffTuning`, `DaffCategoryFilterRequestRangeNumericFactory`, `FilterGroup`, `ActivityPubActor`, `DocumentView`, `DisconnectionEvent`, `IRestClientResponse`, `BackgroundProps`, `TypeObject`, `EyeglassOptions`, `d.ComponentRuntimeMetaCompact`, `Example`, `QuerySuggestionGetFnArgs`, `ParamInfo`, `FieldFormat`, `PutImageCommandInput`, `TimeBucketsInterval`, `FsWatchResults`, `StatusView`, `KVPair`, `AnimationTransform3D`, `TestEvent`, `PersistedData`, `CreateNote`, `DescribeDatasetCommandOutput`, `AxisDependency`, `ExpectStatic`, `sdk.AudioConfig`, `Node.Traversal`, `MatchJoin_MetadataEntry`, `Tree`, `IDesk`, `Merchant`, `JWKStore`, `B7`, `RoleRepository`, `TextOpComponent`, `UnitFactors`, `requests.ListEventsRequest`, `RequestOptions`, `ReplicationConfiguration`, `SavedObjectComparator`, `V1Prometheusrule`, `ConnectionID`, `CreateJobRequest`, `EdgeNode`, `StatesOptionsKey`, `Site`, `J3DFrameCtrl`, `ServiceQuotaExceededException`, `ResourcesFile`, `MochaDone`, `CreateRangeChartParams`, `DisabledDateFn`, `TViewNode`, `Chunk`, `MetadataService`, `AccountRipplePaymentsConfig`, `ConstructorFuncWithSchema`, `DeploymentParametersDoc`, `ResolvedElementMove`, `Repertoire`, `IntersectionC`, `CommandResponse`, `AggregateResponse`, `IWorkflowExecutionDataProcess`, `SqrlRuleSlot`, `PushpinUrl`, `BooleanFilter`, `_Props`, `IfNotExistsContext`, `ForgeModInput`, `CraftTextRun`, `ReactVisTypeOptions`, `TorrentInfo.MediaTags`, `FeatureEdges`, `GitData`, `AuthRequired`, `MousePosition`, `AwsRegion`, `FaastModule`, `backend_util.Activation`, `ListDomainsResponse`, `IMyDpOptions`, `NgSourceFile`, `WalletType`, `IFormField`, `PouchDB`, `ConfigType`, `FileWatcher`, `AggResponseBucket`, `LineCollection`, `RendererType2`, `InputRegisterMaster`, `WebGLResourceRepository`, `Spark`, `IVssRestClientOptions`, `Add`, `BaseInternalProps`, `DescribePendingMaintenanceActionsCommandInput`, `ExtensionProvider`, `StylesConfig`, `FileSystemWatcher`, `C1`, `App.windows.window.IMenu`, `ExtensionPriority`, `InjectCtx`, `ChartsState`, `MediaSlotInfo`, `AdonisRcFile`, `RegistryClient`, `ast.CallNode`, `TypedTransaction`, `ApimService`, `FormattedTransactionType`, `TranspileModuleResults`, `AbstractSession`, `SyntaxNode`, `ReviewerRepository`, 
`LocationCalculator`, `InsertQueryNode`, `StoredState`, `BoardTheme`, `BFS_Config`, `ScopeFn`, `AST.Expression`, `SeedFile`, `C4`, `G2`, `GasParameters`, `BaseClass`, `VuexModuleOptions`, `apid.CreateNewRecordedOption`, `MdcSnackbarContainer`, `TileFeatureData`, `FlatScancode`, `VFileMessage`, `IIndex`, `SelectionChange`, `angular.IRootScopeService`, `NodeFetchHttpClient`, `Fp`, `providers.TransactionRequest`, `FunctionDeclaration`, `ReactiveChartDispatchProps`, `BlockhashAndFeeCalculator`, `LhcDataService`, `CommonTerminalEnum`, `PolarData`, `MockDocument`, `DetectorConfiguration`, `C8`, `X12QueryEngine`, `SimpleUnaryImpl`, `AssignableDisplayObjectProperties`, `PluginsStatusService`, `CollectionReturnValue`, `PortalService`, `ICommonCodeEditor`, `CommandFlag`, `AttributeIds`, `ModalFlavor`, `IMetricAlarmDimension`, `IResourceItems`, `pw.Frame`, `AggParam`, `ReactLike`, `RefactoringsByFilePath`, `IDataIO`, `MailService`, `MockFixture`, `Installation`, `BytesLike`, `MonzoBalanceResponse`, `ListPipelinesCommandInput`, `DiscoverSidebarProps`, `Series`, `CompositionContext`, `Canvas`, `Spinnies`, `ZoneChangeWhisperModel`, `d.PixelMatchInput`, `MemAttribute`, `AirUnpacker`, `PersistItem`, `AsExpression`, `FilePathPosition`, `PrimitiveValueExpression`, `ListTagsCommandInput`, `BackgroundBlurVideoFrameProcessorObserver`, `JSESheet`, `requests.ListWaasPoliciesRequest`, `IHeaders`, `Int32Value`, `HistoryRecord`, `HealthStatus`, `StackFn`, `ResultState`, `LoadingIndicatorProps`, `FetchHttpClient`, `CalendarEventsCache`, `ServiceClass`, `WebXRSystem`, `ScrollDispatcher`, `NodeVo`, `IChannel`, `DeleteBotVersionCommandInput`, `PutLoggingConfigurationCommandInput`, `ClClient`, `StringPublicKey`, `ModalController`, `Shader`, `TableColumns`, `Sub`, `IActionArgs`, `ResourceService`, `playwright.Page`, `ICDN`, `LocalizedError`, `GraphQLNonNull`, `ResourceAlreadyExistsException`, `OrderedAsyncIterableBaseX`, `Core.Position`, `ComponentLookupSpec`, `EntityTree`, `AuthStore`, `Join`, `napa.zone.Zone`, `XUL.tabBrowser`, `dataStructures.BufferMap`, `PriorityCollectionEntry`, `IUploadOptions`, `WidgetDescription`, `DateTimeNode`, `OffscreenCanvasRenderingContext2D`, `DogRepresentation`, `IWriter`, `ExpandGroupingPanelCellFn`, `CreateOpts`, `soundEffectInterface`, `MdxTexture`, `HelpRequest`, `NgbModalRef`, `CardImage`, `TopUpProvider.RAMP`, `ChildDatabase`, `QueryOpt`, `Param`, `DevicesStore`, `TooManyRequestsException`, `TaskObserver`, `requests.ListDedicatedVmHostInstancesRequest`, `ScalesCache`, `AddRoleToDBClusterCommandInput`, `RequesterAuthorizerWithAirnode`, `MatchRecord`, `AlbumType`, `MessageRequester`, `BuildHandler`, `requests.ListIdpGroupMappingsRequest`, `RollupBlockSubmitter`, `NumberTuple`, `ISession`, `BrowserView`, `KhouryProfCourse`, `GridDimensions`, `UsedNames`, `DescribeUserRequest`, `DescribeTagsRequest`, `UserEmail`, `StrikePrices`, `Models.GameState`, `STDataSourceResult`, `VirtualMachineScaleSet`, `FlatQueryOrderMap`, `PopoverTargetProps`, `JSDocPropertyTag`, `IPipelineOptions`, `DetachedRouteHandle`, `QueueClient`, `MessageToMain`, `SqlPart`, `Buttons`, `TutorialContext`, `LookupStrategy`, `InventoryItem`, `CallbackError`, `AnimationEasing`, `ServiceProperties`, `CIFilter`, `VoiceFocusAudioWorkletNode`, `LinesIterator`, `ts.TextSpan`, `RadixTree`, `PageRoute`, `EndpointBuilder`, `EnumShape`, `Rep`, `JupyterMessage`, `OrganizationDepartmentService`, `VueI18n`, `bluebird`, `PermissionResolvable`, `ArgumentNode`, `IsZeroBalanceFn`, `InvalidPaginationTokenException`, 
`ComputeManagementClient`, `MethodDescriptor`, `PIXI.Application`, `ProcessApproachEnum`, `TundraBot`, `AttrMutatorConfig`, `IDateGrouper`, `TE.TaskEither`, `ResponsePromise`, `FeatureCatalogueEntry`, `FileExplorerState`, `Angulartics2`, `StaticCardProperties`, `Partial`, `EffectRenderContext`, `IEditorPosition`, `ImageEditorTool`, `OutboundMessage`, `SceneComponent`, `PerfKeeper`, `RequestPausedEvent`, `AggParams`, `SimpleScalarPropertiesCommandInput`, `CommittedFile`, `KeyInfo`, `DataResolver`, `WebMscore`, `MatchPath`, `SelectOutputDir`, `HTMLFormatConfiguration`, `ServerlessResourceConfig`, `TriangleCandidate`, `Lint.WalkContext`, `StartDependencies`, `ASVariable`, `Fork`, `TestRequest`, `MediatorMapper`, `ITag`, `FileDetails`, `DropoutMasks`, `ColumnFilterDescriptor`, `BN`, `AppExtensionService`, `Argument`, `ClientRepresentation`, `ItemTemplate`, `RelatedViews`, `ProblemData`, `CssBlockError`, `MockAddressBookInstance`, `IncomingRegistry`, `Beneficiary`, `InitialState`, `JsonPayload`, `WorkRequestOperationType`, `CreateRoleDto`, `IEventCategory`, `InterfaceImplementation`, `DeleteContext`, `IInitiativeModel`, `GridSprite`, `AlainConfig`, `CatCommonParams`, `DstatementContext`, `Image`, `FormatGraph`, `FieldParamEditorProps`, `MetricDescriptor`, `DOMTokenList`, `ethers.BigNumber`, `Semver`, `Blog`, `Immutable.List`, `EventMapper`, `pxtc.CompileOptions`, `AbstractProject`, `BreakpointFnParam`, `DaffBestSellersReducerState`, `FileSystemTrap`, `NormalizedField`, `EventIded`, `AccountsStore`, `ToggleButtonProps`, `ICardFactory`, `MaxSizeStringBuilder`, `UninterpretedOption_NamePart`, `RecordOptions`, `GetPointTransformerFn`, `IEnumerator`, `QualifierSpec`, `Dirent`, `JestExt`, `MethodCall`, `VulnerabilityReport`, `DirectoryDiffResults`, `SearchSessionDependencies`, `ExtendedGroupElement`, `GenericTwoValuesAndChildren`, `AndroidPerson`, `AsyncIterableExt`, `HostContext`, `DeployedApplication`, `jest.MockedFunction`, `Services.Configuration`, `TocStepItem`, `ModifyDBClusterSnapshotAttributeCommandInput`, `ResourceStatus`, `Ordering`, `TurnTransport`, `HttpPayloadWithStructureCommandInput`, `DetailedCertificate`, `FleetMetricDefinition`, `ContentChange`, `UIAction`, `NowRequest`, `EC.KeyPair`, `StackParameterInfo`, `Ulimit`, `InitializeHandler`, `CCIndicatorSensor`, `InteractionMode`, `MetaInfoDef`, `TextChar`, `Brush`, `ChangeSetType`, `ZoomDestination`, `HsAddDataVectorService`, `FloatTypedArrayConstructor`, `EngineResults.DiagnoseMigrationHistoryOutput`, `FontFeature`, `LoaderAction`, `QueryOrdering`, `PluginFactory`, `ClientConfiguration`, `CreateViewOptions`, `Repositoryish`, `IPC.IFile`, `IMock`, `ComponentTemplate`, `ChemicalDoseState`, `ExpiryMap`, `RuleSetRule`, `WatermarkedType`, `ErrorHandlingService`, `UnsubscribeFn`, `PartyJoin`, `BytesReader`, `YDomain`, `PyrightFileSystem`, `PredictableStepDefinition`, `ListSendersRequest`, `ChildData`, `BusyService`, `PartialVisState`, `TypeProto`, `TextType.StyleAttributes`, `Tiles`, `TileTexSet`, `GPUTexture`, `FakeChain`, `ResFont`, `SdkFunctionWrapper`, `PolygonCollider`, `ListTagsCommandOutput`, `InspectResult`, `ActionParamsType`, `SegmentItem`, `StreamHandler`, `ITracerBenchTraceResult`, `MkFuncHookState`, `UpdateProjectInput`, `LightChannelControl`, `Resolvers`, `LayersModel`, `SqlTuningAdvisorTaskSummaryReportObjectStatFindingSummary`, `AnyObj`, `ExpressionStepDefinition`, `JsonLdDocumentProcessingResult`, `TestProduct`, `CollisionZone`, `ApplyWorkspaceEditParams`, `ListenOptions`, `AngularFireList`, `TInsertAdjacentPositions`, 
`IBackgroundImageStyles`, `Desktop`, `OrderItem`, `ARMUrlParser`, `PutIntegrationCommandInput`, `IID3v2header`, `IdleState`, `DateInputObject`, `MetricAggTypeConfig`, `NodeDefinition`, `DrawerInitialState`, `Delegate`, `CommandBuffer`, `ColorScheme`, `RobotApiResponseMeta`, `MP.Version`, `IConnectionProfile`, `ResourcePendingMaintenanceActions`, `StringOrNull`, `ILineDiv`, `IqSelect2Item`, `IContextualMenuProps`, `IVarXYValue`, `requests.ListInternetGatewaysRequest`, `Shift`, `ContentBlockNode`, `vscode.TerminalDimensions`, `TIcu`, `NVM500JSON`, `SpheroMini`, `ITokenRefresher`, `GfxProgramP_GL`, `IViewData`, `NavigationPublicPluginStart`, `SolutionBuilderState`, `HoverResults`, `ClassSchema`, `AuthenticationMethodInfo`, `TreeDir`, `HomePublicPlugin`, `HandlerFunction`, `ShortUrlRecord`, `NormalizedEnvironment`, `SharedCLI`, `ConditionsArray`, `NativePath`, `ElementsTable`, `Source`, `SharedTestDef`, `RestClientOptions`, `OAuthError`, `IEnumerable`, `RemoteController`, `CommentNotification`, `UpdateComponentCommandInput`, `LocalizeParser`, `Highland.Stream`, `RuleFailure`, `PortBinding`, `ISample`, `RtkResourceInfo`, `RuntimeTable`, `Vertex`, `Phaser.Math.Vector2`, `ListModelsRequest`, `Electron.OpenDialogOptions`, `WizardTestComponent`, `QueryService`, `SessionChannel`, `SkipBottomButtonProps`, `m.Vnode`, `IRawHealthStateCount`, `ISetLike`, `HasSelectorNodes`, `VirgilPrivateKey`, `PublicIPAddress`, `DropTargetConnector`, `requests.ListInstanceAgentCommandExecutionsRequest`, `helper.PageOptions`, `VaultBackupType`, `SavedObjectsImportError`, `RenderAPI`, `EditableEllipse`, `ThemeTag`, `GetPrismaClientConfig`, `IWorkerContext`, `CategoryCollectionParseContextStub`, `UseConnectResult`, `ContextContributorFactory`, `SpeculativeTypeTracker`, `EntryProcessor`, `IMessageOptions`, `TreeItemComponent`, `OrderbookResponse`, `EditableRectangle`, `core.ScalarOutSpread`, `OrderService`, `RxFormControl`, `DropIdentifier`, `IDataItem`, `IDialogContext`, `InternalPlotConfigObject`, `DeleteQueueCommandInput`, `UnoGenerator`, `lf.schema.TableBuilder`, `requests.ListLoadBalancerHealthsRequest`, `DSOChangeAnalyzer`, `MutableCategorizedStructProperty`, `DefinitionLocations`, `LoadCache`, `mjComponent`, `Globe`, `TSize`, `BasicGraphOnEdges`, `SerializedDatatable`, `Machine`, `ODataEntitySet`, `PutAccountsRequestMessage`, `requests.ListResolversRequest`, `ReadResult`, `PreferenceSchema`, `RunProps`, `Math2D.UvBox`, `LocalUser`, `LexerInterpreter`, `EscapeableMethod`, `TemplatePart`, `Ornaments`, `TokenIterator`, `IObjectOf`, `HTMLLabelElement`, `TagObject`, `AtlasManager`, `TreeDirItem`, `SingleSampledTexture`, `EzBackend`, `InterfaceInternal`, `ListModelDeploymentsRequest`, `Argv`, `CLM.ScoredAction`, `Promotion`, `AnalyticsService`, `ExternalAttributionSources`, `PolicyType`, `ResolveIdResult`, `Register`, `MerkleIntervalTreeNode`, `ServiceAnomalyTimeseries`, `PhysicalLayout`, `Mdast.Link`, `PodDataPoint`, `IData`, `PiPropertyInstance`, `UiSettingsClient`, `SearchExpressionGroup`, `CreateAggConfigParams`, `N2`, `HighContrastModeDetector`, `builder.IEvent`, `ResultPoint`, `Bind`, `ProgramCounter`, `TransformerPayload`, `TrackingData`, `Bass`, `OutgoingMessage`, `Constructor`, `AdminActions`, `RoleOption`, `NoteItem`, `CompressorOptions`, `LoaderData`, `FacepaintStyleSheetObject`, `DirItem`, `InstancesState`, `StorageAdapter`, `ScalarTypeComposer`, `StandardProps`, `MemoExoticComponent`, `TransformLike`, `RadListView`, `PolymerElement`, `NgWalker`, `SearchInWorkspaceResult`, `ContractProgram`, 
`NativeStackScreenProps`, `SettingsDataProvider`, `PointerPressAction`, `VRMCurveMapper`, `ParenthesizedExpression`, `CsvFormatterStream`, `BoolTerm`, `CraftBlock`, `EmitterManager`, `PartialConfig`, `RealtimeUsersWidgetData`, `PanelConfig`, `IStyleAttr`, `BlockService`, `JsonResult`, `WindowService`, `React.PropsWithChildren`, `TestOptions`, `DomainDropSet`, `VisualizePluginSetupDependencies`, `ConfirmedTransaction`, `ArrayType1D`, `Internal`, `StateHandler`, `WorkflowActivateMode`, `IFormat`, `WhiteBalance`, `IRouterConfig`, `CompressedEmojiData`, `IFormSection`, `StacksPublicKey`, `SelectionService`, `InputObjectTypeDefinitionNode`, `BasePlayer`, `SqlQuery`, `DataModel.RowRegion`, `GUIDestination`, `ProtocolNotificationType0`, `NoteForActivitySetup`, `SiteLicenses`, `ListEnvironmentTemplatesCommandInput`, `IDataFilterValue`, `WhitelistType`, `FullyQualifiedScope`, `StatusService`, `Graph`, `IMaterial`, `FirebaseListObservable`, `StubProvider`, `IsAssign`, `DescribeAccountCommandInput`, `NinjaItemInfo`, `StatementCST`, `FeaturedSessions`, `PlaywrightTestConfig`, `TaskPriority`, `VersionHistoryDataService`, `UseMutationOptions`, `Preview`, `Expectation`, `PrerenderUrlResults`, `FirewallRule`, `GetPromiseInvocationContext`, `DescribeEventsCommandInput`, `ThyTransferDragEvent`, `UrlDrilldown`, `types.UiState`, `NextPageContext`, `ListProfilingGroupsCommandInput`, `ts.DiagnosticCategory`, `LayerEdge`, `QueryCompleteContext`, `size_t`, `RelationsService`, `CompilerOptionsValue`, `IComplexTypeEx`, `EntityWithGroupType`, `DecryptedUserMessage`, `ListFindingsResponse`, `Square`, `OpenAPISchema`, `OperatorFormat`, `CaptionElementProps`, `OptionalRef`, `IBindingTemplate`, `DateSchema`, `QueryParamConfig`, `Routes`, `C2dRenderTexture`, `StandardSchemeParams`, `RegionData`, `RegisteredActionRunner`, `KeyPathList`, `AssetId`, `KernelParams`, `MediaTrackSettings`, `SceneNodeBuilder`, `TraitMap`, `CommandSetting`, `k`, `RegisteredPlugin`, `LookUp`, `MetronomeBeam`, `SimpleRNN`, `TabbedRangeFilterParams`, `Probe`, `freedom.Social.ClientState`, `AuthDispatch`, `DOMHandlerScope`, `ContractMetadata`, `PointCloudMaterial`, `PlacementTypes`, `schema.Context`, `NotificationBarService`, `AppearanceCharacteristics`, `LoginPayload`, `UpSetProps`, `JsonLocations`, `ListConnectionsCommandInput`, `RLAN`, `ArrowHelper`, `TestFunction`, `ConnectionMessage`, `IAmazonS3Credentials`, `CmdType`, `JobAgent`, `DescribeLoadBalancersCommandInput`, `Preprocessor`, `Path1`, `PatternRecognizer`, `PersistedStatePath`, `DragDropProviderCore`, `FieldValidationResult`, `ProjectInput`, `ChainableElement`, `EditorWidget`, `AddFriendsRequest`, `HttpServer`, `DomainData`, `ActivityDefinition`, `StreamDescription`, `ITextureInfo`, `DocumentManager`, `FieldToMatch`, `GetImportJobCommandInput`, `ExprVisState`, `BlockChainUser`, `HTMLDetailsElement`, `SeriesChanges`, `InternalGroup`, `MockClientFactory`, `WasmResultValues`, `SecurityRequestHandlerContext`, `ItemOptions`, `ApiErrorReporter`, `HashParams`, `IRpoToken`, `TypeKind`, `HistoryNodeEvent`, `TileInfo`, `NbToastrService`, `GltfNode`, `Point`, `ConflictType`, `GestureEventData`, `CreateTokenAccount`, `android.content.Intent`, `EntityColumnDef`, `GetUserData`, `HsDialogItem`, `IAccordionItemContextProps`, `OrganizationSet`, `AzureConfigs`, `GetSymbolAccessibilityDiagnostic`, `OnEvent`, `PriceOracle`, `UntagResourceInput`, `PrismaClient`, `ICountryModel`, `ReaderConfig`, `Group`, `ActionFilter`, `ParametersPositionContext`, `InternalProps`, `CustomOkResponse`, 
`JsonIdentityInfoOptions`, `Events.postcollision`, `SaveEntities`, `ts.WriteFileCallback`, `MethodDetails`, `CamelElement`, `IModelConnection`, `ControlOptions`, `IPivotItemProps`, `NodesInfo`, `ConnectedWallet`, `IFormInput`, `ParameterInformation`, `ModelRenderer`, `SamlRegisteredService`, `AngularFireFunctions`, `W1`, `GunGraphAdapter`, `program.Command`, `RedPiece`, `DataOption`, `DeleteNamespaceCommandInput`, `GossipMemoryStore`, `ProjectLabelInfo`, `Viewport_t`, `BaseExecutor`, `StackDescriptor`, `ProjectExtentsClipDecoration`, `LicenseSubs`, `CacheKeys`, `FullRequestParams`, `Inhibitor`, `DescribeDBClusterSnapshotAttributesCommandInput`, `BoneDesc`, `Lru`, `PutResourcePolicyRequest`, `ViteDevServer`, `RequireFields`, `d.BundleModule`, `JoinDescriptor`, `BatchCheckLayerAvailabilityCommandInput`, `MockCachedRule`, `ErrorState`, `ShareMap`, `InterfaceDeclarationStructure`, `StateChange`, `TStylingContext`, `Declaration`, `UseSavedQueriesReturn`, `IRootState`, `GraphService`, `IpcRendererEvent`, `TypesStart`, `CombatantState`, `IReader`, `Knuckle`, `DataReader`, `TokenState`, `BsModalRef`, `ConnectionSetting`, `MatOpM`, `ShaderRegisterElement`, `TSObj`, `ICXOrder`, `TouchMouseEvent`, `ODataUri`, `cdk.App`, `ModelTypes`, `UInt32`, `TimePickerState`, `SimpleToastCreator`, `ApolloTestingController`, `AuthProps`, `Bean`, `MockRouter`, `PDFPageEmbedder`, `RegSuitConfiguration`, `CloudWatchDimensionConfiguration`, `IOrganizationSprint`, `DataSnapshot`, `IComponentName`, `IAmazonClassicLoadBalancerUpsertCommand`, `XRSession`, `RestorePoint`, `ISelectionId`, `MetadataProperty`, `StageRuntimeContext`, `GanttSettings`, `SubscriptionTracker`, `UpdateGroupRequest`, `Uri`, `MediaProviderConfig`, `MessageProps`, `WebSettings`, `PropertiesField`, `TreeNode2`, `Truncate`, `ScannerState`, `CreateCrossAppClickHandlerOptions`, `WechatSettingService`, `HeaderComponent`, `requests.ListTsigKeysRequest`, `LayoutDefaultHeaderItemComponent`, `DeleteExpression`, `CommonSelectedNode`, `APIClient`, `IEventFunction`, `GraphIIP`, `TestHost`, `CategoryCollectionParserType`, `TreeviewComponent`, `PlotArea`, `IAppContext`, `SessionConnection`, `SalesLayoutState`, `HttpRequestOptions`, `GlobalVarsService`, `BleepsGenerics`, `NavigatableWidget`, `EntityNameOrEntityNameExpression`, `PerspectiveCamera`, `ChartJSNodeCanvas`, `ServiceNowActionConnector`, `IAuthStrategy`, `WorkerPoolResource`, `HealEvent`, `SupCore.Data.Schema`, `PedalTuning`, `fromAuthActions.Login`, `HsEventBusService`, `SemanticTokensParams`, `SolutionToApiAnalysis`, `AngularPackage`, `ArrayBuffer`, `GfxrGraphBuilder`, `BadRequestException`, `ModuleNameNode`, `PortalType`, `StreamResponse`, `SetContextLink`, `MemoryPages`, `BookmarkChange`, `CreateStackCommandInput`, `FileScan`, `IFiber`, `ICondition`, `CronProcessTable`, `CanaryExecutionRequest`, `TransactionSkeletonType`, `MovingDirection`, `SceneObjHolder`, `ComponentFileItem`, `TestType`, `BitstreamDataService`, `GenerateFn`, `IGitApi`, `HistoryResponse`, `TextPathGeometry`, `MaxPooling1D`, `ExtractDto`, `NumberFormat`, `APIConfig`, `Slugifier`, `protocol.Message`, `DestinationAuthToken`, `CoreContext`, `MappedPosition`, `ODataModelField`, `Ivl`, `TestFunctionImportEdmReturnTypeCollectionParameters`, `AssetPropertyTimestamp`, `AirtableBase`, `AccountImplement`, `Combinator`, `UInt160`, `TypeDetails`, `HeadingProps`, `DataSeriesDatum`, `PathRef`, `ICardEpisode`, `ReadonlyArray`, `core.BTCAccountPath`, `IScreen`, `Survey.Operand`, `Store`, `FIRStorageTaskSnapshot`, `SignOptions`, `ChartWidget`, 
`IncludedBlock`, `OpType`, `IContext`, `CustomNestedProps`, `PubArt`, `ModuleDefinition`, `PromiseSocket`, `TranslationSettings`, `HttpStatusCodes`, `SavedVisualizationsLoader`, `SFUISchemaItem`, `PassportStatic`, `DeviceService`, `ProductService`, `TextAreaComponent`, `TNodeReturnValue`, `LoggingConfigType`, `MyEvent`, `MetalsTreeViewNode`, `ResolvablePromise`, `EvmContext`, `PaymentInfo`, `InputSettings`, `RollupWarning`, `FkQuadTree`, `TestFolder`, `Filesystem`, `SnackbarKey`, `DescribeEndpointsCommandInput`, `ResourceArguments`, `CaretOptions`, `StaffLayout`, `DecorationFileMap`, `ECharts`, `IConnectionFormState`, `Evt`, `LinkObject`, `UsbDevice`, `IsometricPoint`, `SecurityReport`, `SpeechServiceConfig`, `NotificationTemplateEntity`, `IDrawOption`, `LikeEntity`, `InspectorViewDescription`, `TemplRef`, `ComponentService`, `AccessKey`, `SelectionModel.Selection`, `AVRIOPort`, `yargs.Arguments`, `DataViewCustom`, `ProjectColumn`, `SavedObjectsImportFailure`, `StorageImpl`, `SMTMaskConstruct`, `TryStatement`, `IPlDocObject`, `PhraseFilterValue`, `Prisma.Sql`, `UsersActionType`, `Matrix`, `ITooltipProps`, `PlaybackSettings`, `AnyBody`, `IFeature`, `InternalHandler`, `Servient`, `TemplateSummary`, `NibbleDisk`, `DataItem`, `Progression`, `ExtendedType`, `IQueryBuilderPart`, `oke.ContainerEngineClient`, `SdkSignalFrame`, `DebugProtocol.OutputEvent`, `DebugProtocol.Source`, `ScopedLabel`, `DecoratorConfig`, `ICategory`, `NationalTeam`, `TokenSource`, `Cypress.Response`, `requests.ListSecretVersionsRequest`, `AggTypeFieldFilter`, `FieldToValueMap`, `DelayNode`, `UsageSummary`, `MonitoredItem`, `DeleteImageCommandInput`, `SavedObjectUnsanitizedDoc`, `ScullyRoute`, `CalculateBoundsFn`, `ActivePoint`, `SqlObject`, `SymbolKey`, `AuxChannelData`, `BundleModuleOutput`, `DescribeCodeBindingCommandInput`, `VdmFunctionImport`, `TupleAssignmentContext`, `DocumentDelta`, `CreateTokenCommandInput`, `ClanStateService`, `IAnalysisState`, `NumberDraggerSeg`, `JsonFormsState`, `AsyncLocalStorage`, `StateProvider`, `SolanaNetwork`, `BeatUnitDot`, `SRT0_MatData`, `vscode.ConfigurationChangeEvent`, `d.OptimizeCssInput`, `U`, `VersionStage`, `IBlockchain`, `FolderItem`, `ScrollInfo`, `UniversalRouterSync`, `TT.Tutorial`, `Stream`, `TupletDot`, `TestContracts`, `ReferenceContext`, `IFileTreeItem`, `Field_Ordinal`, `ParticipantSubscriber`, `H264RtpPayload`, `AuthModeChanged`, `CompilerAssetDir`, `ApplicationRepository`, `TextureFetcher`, `DmChannelDTO`, `jasmine.Spy`, `IServerOptions`, `UserFunctionNamespaceDefinition`, `JSON_PayloadInMask`, `PrismaClientRustErrorArgs`, `StartQueryCommandInput`, `IDBAccessQueryParams`, `DiscordBridgeConfig`, `PerformListFilesArgs`, `GfxQueryPoolP_GL`, `GoStoneGroup`, `ResolvedConnection`, `ListConfigurationsCommandInput`, `AndroidChannel`, `SchemaToArbitrary`, `GherkinException`, `AllFile`, `VideoModel`, `child_process.ChildProcess`, `BaseVisTypeOptions`, `VAStepData`, `NotifyPlacement`, `ButtonColor`, `TResult`, `UserFunctionDefinition`, `PairSide`, `TemplateTag`, `ChartSpecPage`, `HttpEvent`, `SubjectDataSetColumn`, `WorkRequestWaiter`, `MetricType`, `GetMasterAccountCommandInput`, `CreateJobCommandInput`, `InferenceInfo`, `UserInput`, `TooltipValue`, `MenuSection`, `ApisTreeItem`, `LocalOptionsMap`, `MockBaseElement`, `RedirectResult`, `Ecies`, `WebAppRuntimeSettings`, `ts.LabeledStatement`, `AlterTableBuilder`, `ZipResource`, `NameType`, `ExpirableKeyV1`, `IBaseView`, `ReactChild`, `eui.Image`, `MarkSizeOptions`, `Fill`, `Mute`, `BlockMap`, `SelectorCore`, `MobileRpcChunks`, 
`TypingIndicatorReceivedEvent`, `CancellablePromise`, `SerializeErrors`, `marked.Renderer`, `MockCall`, `MnemonicLanguages`, `AbsoluteDirPath`, `ScaleMap`, `UpdateUserResponse`, `IterableChanges`, `ChannelFactoryRegistry`, `Perm`, `V1DaemonSet`, `Calendar_Contracts.IEventSource`, `SymbolAccessibilityDiagnostic`, `SourceControlResourceState`, `JsonDocsSlot`, `WalletError`, `TestStep`, `ElmExpr`, `ShorthandFieldMapObject`, `AnalyticSegment`, `Q.Deferred`, `SourceFileInfo`, `IPropertyValueDescriptor`, `YBasicSeriesSpec`, `WorkItemTypeUI`, `User.Type`, `GuideType`, `ComponentTestingConstructor`, `WordStorage`, `ViewUpdate`, `Toolbox`, `CrochetType`, `BTreeNode`, `DocView`, `OrgInfo`, `GetText`, `ModuleResolutionKind`, `estree.Node`, `EncodeOptions`, `EventAccumulator`, `FileOutput`, `PointCloudHit`, `ConversationState`, `MilestoneActivity`, `Polyface`, `LoggerFormat`, `AuthenticateFacebookInstantGameRequest`, `ValidatedPurchase`, `Persistor`, `FuncMode`, `CachedTileLayer`, `BreadcrumbItemType`, `PathReference`, `WebSocketClient`, `ApplicationStart`, `ShaderParam`, `TestScriptResult`, `FileDefinition`, `IContextMenuItem`, `Beam`, `BranchDataCollection`, `DefaultDataService`, `ResponseError`, `TaskFn`, `InteriorNode`, `TExtra`, `New`, `TernaryNode`, `IPosition`, `SpecializedFunctionTypes`, `LabelDefinitionJSON`, `tmrm.TaskMockRunner`, `ChatItemSet`, `AssignmentCopyStep`, `AppRedux`, `KeyRingStore`, `DataFileType`, `SfdxCommandDefinition`, `CustomDecorator`, `VisualizationChartProps`, `EntriesArray`, `HsdsEntity`, `ModuleDatafeed`, `GraphEdge`, `GetDeprecationsContext`, `ApexTestNode`, `Predicate`, `TemplateHandlers`, `KeyofC`, `HeaderColumnChainRow`, `NextApiRes`, `WalkerDown`, `Z64SkeletonHeader`, `VersionNumbers`, `RoverInitialState`, `DeeplyMocked`, `CdkDragEnter`, `requests.ListConnectHarnessesRequest`, `IEffect`, `LabelValue`, `PSIDataType`, `Sema`, `QueryFormColumn`, `PackageId`, `DSlash`, `YConfig`, `Bytecode`, `EnvValue`, `Dependency`, `BuiltAction`, `IntrospectionEngineOptions`, `NavigatorParams`, `MemoryArray`, `GetUserSettingsCommandInput`, `ConfirmationService`, `PluginEditorProps`, `SegmentRef`, `CandleData`, `RequiredFieldError`, `ApiRoute`, `StyledButtonProps`, `DescribeEventsMessage`, `ItemType`, `ThreadID`, `RevisionValueCache`, `CppCbToNew`, `DeliveryDetails`, `BreakpointObserver`, `QueryFormData`, `IndexedDB`, `InputInfo`, `TicTacToeGameState`, `CollisionContact`, `InternalProvider`, `PluginPageContext`, `Node_Interface`, `ListApplicationsCommandOutput`, `TransactionReceiptTruffle`, `ReplacementRule`, `ParamsOptions`, `EntityUpdate`, `TransferItemOption`, `FastifyError`, `IStorageLayer`, `EmptyInputAndEmptyOutputCommandInput`, `Fanduel`, `InterfaceWithThis`, `DoubleValue`, `OpenApi.Document`, `VectorList`, `d.BuildSourceGraph`, `CancellationStrategy`, `InputPort`, `CommandCreatorError`, `DirectBuy`, `ObjectModel`, `IHawkularAlertRouterManager`, `SimpleNode`, `Heap`, `DaLayoutService`, `DnsValidatedCertificate`, `ILinkProps`, `core.ETHAccountPath`, `KibanaResponse`, `SpecRoleCapabilities`, `IAbortAblePromise`, `LoginAccountsRequestMessage`, `OrganizationMemberType`, `ProgressConfig`, `IBalanceValue`, `MutationObserverInit`, `MeshSprite3D`, `StartPipelineExecutionCommandInput`, `IDistributionDelta`, `JsonFragment`, `ConnectionInformations`, `FortuneOptions`, `BufferId`, `Lambda`, `ComponentClass`, `ODataSingletonResource`, `IValidationResponse`, `Chip`, `TableClient`, `ExportDefaultDeclaration`, `RemoteInfo`, `IMessageValidator`, `FormReturn`, `GenerationStatus`, 
`GetAppDefinitionParams`, `DiffFile`, `FeatureID`, `ReactComponent`, `Airline`, `MonacoEditorService`, `PouchDBFileSystem`, `NotFound`, `Array4`, `FormlyDesignerConfig`, `RouteValidator`, `MessageError`, `ListPresetsCommandInput`, `QueryWithHelpers`, `Matrix2`, `At`, `CouncilData`, `OnDemandPageScanResult`, `GunGraphData`, `ProductSearchParams`, `core.ITenantManager`, `SubscribableEditionComboboxItemDto`, `TSIf`, `IAbstractControl`, `SerializableMap`, `InternalDatasource`, `CodeMirror.Editor`, `RangeFilterParams`, `IPageData`, `Hook`, `GetDeploymentsCommandInput`, `NelderMeadPointArray`, `webpack.Configuration`, `SpriteData`, `ParsedMessage`, `WriterContext`, `BufferView`, `AllAccessorDeclarations`, `RectAnnotationSpec`, `ProjectorPerformanceLogger`, `TextProperties`, `NodeRef`, `IndexPatternSelectProps`, `BehaviorObservable`, `NgbPanelChangeEvent`, `BlinnPhongMaterial`, `RangeSelectorOptions`, `AbstractDistanceCalculator`, `AlertData`, `ModelProps`, `ICommon`, `QueryPayload`, `LaunchTemplateOverrides`, `CellSelection`, `ConceptConstraint`, `Setdown`, `IShellMessage`, `Composable`, `InstantiableRule`, `MaterialSet`, `ReservedIP`, `CommandEvent`, `Swizzle`, `RefService`, `FileDeclaration`, `_ZonePrivate`, `GeoJSON`, `AreaChartOptions`, `AlertAction`, `SqlTaggedTemplate`, `QuestionModel`, `DappRequest`, `MessagePayload`, `AuthenticationName`, `IJumpPosition`, `ChannelType`, `FieldVisitor`, `NzFormatEmitEvent`, `AttributionInfo`, `ServiceImplementations`, `NSData`, `MEMBER_FLAGS`, `JsonAstNode`, `DeleteSnapshotCommandInput`, `GameBase`, `Pagerank`, `DiagnosticMessage`, `Other`, `TEAttr`, `TSBreak`, `CacheItem`, `HistoryTreeItem`, `IRequestInfo`, `AnalysisResults`, `SavedObjectsMigrationVersion`, `CliqueVote`, `Project.ID`, `DhcpOption`, `ActivatedRouteSnapshot`, `CommandExecution`, `AggConfigSerialized`, `VideoDescription`, `SchemaValidatorFactory`, `DeviceType`, `StateIO`, `IOptimizeOptions`, `TypeScriptEmitter`, `ITransUnit`, `ConfigTypes.CFWorkers`, `BucketMetadataWithThreads`, `ConceptDb`, `ContractState`, `FSFile`, `ProfileStateModel`, `FaunaIndexOptions`, `SimpleReloaderPlugin`, `Notes_Contracts.Note`, `IReduxAction`, `EnvironmentOptions`, `RelationExt`, `ValuePaddingProvider`, `TDiscord.Guild`, `AuthorizeParamsDto`, `BookSavedObjectAttributes`, `XmlStateConsumer`, `UpdateValueExpression`, `DataValidationCxt`, `NoteSnippetEditorRef`, `IApplicationState`, `DbStxLockEvent`, `requests.UpdateProjectRequest`, `SkeletonField`, `CompositeDisposible`, `QueryCommandInput`, `IGDILogger`, `Vector3Keyframe`, `ExchangeOptions`, `$G.IGraph`, `interfaces.BindingInWhenOnSyntax`, `Key2`, `IFormValues`, `IFrameHeader`, `EmailConfirmationValidator`, `JSExcel`, `ts.FileWatcher`, `StructDef`, `ArrayServiceArrToTreeNodeOptions`, `EventArgs`, `PiTypeDefinition`, `Enumerator`, `NodeJS.Process`, `HsAddDataCommonService`, `SfxData`, `CommandArgument`, `ts.Expression`, `HitsCounterProps`, `IconifyIconCustomisations`, `MemberData`, `RTCStatsReport`, `DisLabel`, `FileTextChanges`, `RequestConfig`, `Trait`, `Words`, `GetAccountCommandInput`, `ListingMeta`, `SafeTransaction`, `PathAndContent`, `BranchFlagStm`, `WebElementWrapper`, `ConnectDetails`, `OperationDetails`, `Decoration`, `MessageWithoutId`, `BeforeCaseCallback`, `CLM.TextVariation`, `StoreNode`, `IReducer`, `IsoLayer`, `DinoController`, `DeleteEndpointCommandInput`, `requests.ListPoliciesRequest`, `tf.Tensor1D`, `TemplateFunction`, `TPackage`, `dia.Link`, `ResourceHelper`, `path.ParsedPath`, `TransientState`, `TInjector`, `GetGeneratorOptions`, 
`DeleteVpcPeeringConnectionCommandInput`, `NetworkContext`, `RBNFNode`, `ImageRequest`, `Version`, `egret.Event`, `JUser`, `GClient`, `CdsControlMessage`, `UST`, `EVMPayload`, `RawGraphData`, `Vector2`, `PlanetInfo`, `ResolverRpCb`, `objectPointer`, `AmongUsSession`, `ColorSwitchCCStartLevelChange`, `sdk.LanguageUnderstandingModel`, `RumConfiguration`, `Rx.Observer`, `Yendor.TickResultEnum`, `IUserDetails`, `ReaderPage`, `Padding`, `AccordionComponent`, `ReplayableAudioNode`, `IModuleMap`, `MessageRepository`, `IUserRequestOptions`, `MIRArgument`, `UsePaginationModelConfig`, `ResponseParams`, `ITransformerHandleStyle`, `IParticipant`, `VersionOperatorContext`, `GraphQLSchemaWithFragmentReplacements`, `TableDataProvider`, `BlokContainerUserSettings`, `EzBackendOpts`, `TEmoji`, `WaitForSelectorOptions`, `fhir.Identifier`, `ActionInterval`, `vile.Issue`, `AppWithCounterState`, `UserInputPlugin`, `ScopeableRequest`, `Sid`, `FixtureContext`, `Matrix4d`, `IRuntimePosition`, `LoadingBarsEffectsRefs`, `ITemplate`, `PollerLike`, `ValueHandler`, `anchor.web3.Connection`, `HarmajaStaticOutput`, `Touched`, `TInput`, `ExtOptions`, `EntityActionParam`, `ShaderParams`, `LogFunction`, `ParameterValueDefinition`, `IArguments`, `SingleASTNode`, `ConvertService`, `GanttDate`, `CommandClient`, `IFactory`, `Timeline`, `TargetRange`, `PkgConflictError`, `SetIconMode`, `BoxCache`, `Relationships`, `IUserDocument`, `MutationFunction`, `MessageHeaders`, `ListTagsRequest`, `StagePanelSection`, `ColorGradient`, `SupabaseClient`, `MDCAlertAction`, `UpdateDependenciesParams`, `RectModel`, `requests.ListAutonomousContainerDatabaseDataguardAssociationsRequest`, `ThroughputSettingsUpdateParameters`, `B13`, `ToastParams`, `SecretVersion`, `NavService`, `TimeScale`, `EventTypeService`, `serialization.ConfigDictValue`, `ListRequest`, `Sphere`, `AggregationMode`, `parse5.ASTNode`, `UserStorage`, `IManifestArmor`, `InfuraProvider`, `Identification`, `DeepLink`, `_DeepPartialObject`, `TooltipPayload`, `SourceConfiguration`, `CSTeamNum`, `ts.AsExpression`, `SVGGraphicsElement`, `ImageState`, `TangentType`, `NcTabs`, `BitGo`, `MatBottomSheet`, `ParameterCategory`, `AliasMapItem`, `IAGServer`, `PublishDiagnosticsParams`, `EventCreatorFn`, `QTMCounterState`, `SuccessfulMatchReport`, `CobIdentifier`, `UserSession`, `ContextTransformFieldType`, `IColumnToolPanel`, `StitchesComponentWithAutoCompleteForReactComponents`, `d.RollupChunkResult`, `IFieldSchema`, `ComponentLocale`, `BooksState`, `AudioPlayerState`, `Public`, `NavControllerBase`, `TransmartStudy`, `IStorage`, `ConversationNode`, `StorageFile`, `OrbitTransformation`, `CalendarRepository`, `AddressHashMode.SerializeP2SH`, `InboundTransport`, `CrawlContext`, `SubscriptionObserver`, `ResStatus`, `RSSI`, `RRNode`, `FindConditions`, `ApprovalPolicy`, `MDCLineRippleFoundation`, `CreateAuthorizerCommandInput`, `EvaluationResult`, `RenderBufferTargetEnum`, `DecryptedSymmetricKey`, `MentorBasic`, `EventArg`, `IUnitStoryChapter`, `ObservableMap`, `DMMF.SchemaEnum`, `FaceNameSwizzler`, `GitHubClient`, `Hour`, `UserChallengeData`, `CryptoFrame`, `VectorType`, `CreateContactCommandInput`, `LuaSymbolInformation`, `Achievement`, `Lazy`, `CreateDeploymentCommandInput`, `Password`, `CaseInsensitiveMap`, `DestinyCacheService`, `PackedBubblePoint`, `TSESLint.RuleModule`, `Ingredient`, `ListSortMembersSyntax`, `ElementResult`, `ObjectButNotFunction`, `IArgs`, `DecodedPixelMapTransaction`, `CoreUsageStatsClient`, `IndexedGeometryMap`, `SpawnASyncReturns`, `FolderService`, `Invoice`, `Pixels`, 
`ChoicesEntity`, `ListOpsInfo`, `IExpectedIdToken`, `DataCenterResource`, `CategoryMap`, `RPCConnection`, `ResolvedPointer`, `ColorScale`, `PackageSummary`, `EmitContext`, `MarkupElement`, `InterpolateData`, `TranslationFile`, `UploadApiResponse`, `TranslationKeys`, `PatchOperation`, `NgGridItem`, `AtomicMarketNamespace`, `DiagnosticResult`, `CollisionCategorizedKeeper`, `ColorFunc`, `HR`, `AnalyserNode`, `ts.TypeParameterDeclaration`, `IssueLocation`, `LambdaContext`, `ARMRamItem`, `ResponseWithBodyType`, `PanelNotificationsAction`, `Trail`, `RequestedServiceQuotaChange`, `GraphQLInputField`, `PointerAllocationResult`, `PackagePolicy`, `CLM.TrainScorerStep`, `SdkAudioStreamIdInfoFrame`, `MicroAppConfig`, `ApiCall`, `protocol.FileLocationOrRangeRequestArgs`, `PackageJsonWithTsdConfig`, `ModifyDBInstanceCommandInput`, `Junction`, `UiSettingsParams`, `ODataService`, `AddGatewayV1`, `JobValidationMessage`, `AccessTokenResponse`, `AbortChunk`, `ValueOptions`, `RawLogEvent`, `HomeComponent`, `WalletPage`, `CloudTasksClient`, `FieldResultSettingObject`, `HighlightData`, `StorageConfig`, `TxOptions`, `GetZoneRecordsRequest`, `ColumnWorkItemState`, `StringifiedUtil`, `WebTally`, `EmitTextWriterWithSymbolWriter`, `VueFile`, `CreateEndpointCommandInput`, `AirlineEffects`, `OrthographicCamera`, `IArticleField`, `MessageGroup`, `Manipulator`, `UISliceState`, `ExpandedArgument`, `TaskDraftService`, `FilterDataStatusValues`, `CompletionRecord`, `HandledEvent`, `CommonTableExpressionNode`, `Bank`, `CreateDBSubnetGroupCommandInput`, `SearchInputProps`, `SnapshotListenOptions`, `ReconnectDisplay`, `DockerRegistryHelper`, `ParamT`, `ChainJson`, `PnpmShrinkwrapFile`, `IMergeViewDiffChunk`, `StorageManager`, `DiffCopyMessage`, `Datastore.Context`, `CDJStatus.State`, `AddressData`, `SavedToken`, `ServerListEntry`, `ChordType`, `TypeOperatorNode`, `ChartDownload`, `ActionFilterAsync`, `DefaultRollupStateMachine`, `IntlShape`, `ButtonType`, `PreRenderedChunk`, `ExpressionReturnResult`, `DeleteConfigurationSetEventDestinationCommandInput`, `Knex.TableBuilder`, `ScaleLinear`, `ContainerBase`, `BuiltQuery`, `IComm`, `WrappedStep`, `MDCChipAdapter`, `IsNot`, `SecureRandom`, `RaycasterEmitEvent`, `requests.ListBootVolumeBackupsRequest`, `vscode.CancellationTokenSource`, `CodeBlockProps`, `ElementSession`, `AxisLabelFormatter`, `ApplyResult`, `EmulateConfig`, `IPlatformService`, `ts.ParseConfigFileHost`, `SHA384`, `avcSample`, `ReputationOptions`, `preValidationHookHandler`, `fs.Dirent`, `ISceneActor`, `ObjectiveModel`, `GEvent`, `CarouselInternalState`, `LinearSweep`, `ProColumns`, `TextEditorHelperReturnType`, `UserState`, `Moment`, `DeleteFileOptions`, `snowflake`, `MdcIconRegistry`, `IntervalType`, `EzBackendInstance`, `CommandClass`, `SuccessCallbackResult`, `DateTime`, `TestSet`, `Identify`, `OUTPUT_FORMAT`, `TensorData`, `PropertyAst`, `BlurState`, `TabBar`, `AddTagsCommandInput`, `IInventoryArmor`, `DaffCartCouponFactory`, `CmsModelFieldToElasticsearchPlugin`, `EncryptionError`, `Ray3`, `OneListing`, `PreloadData`, `ExecuteStatementCommandInput`, `ServerSecureChannelLayer`, `Salt`, `MapIncident`, `ReportingCore`, `DomainEntity`, `CreatePostInput`, `NSV`, `AtomFamily`, `SnapshotOrInstance`, `VariableDeclarationList`, `CreateFieldResolverInfo`, `SeriesUrl`, `CodeLens`, `Operator.fλ.Stateless`, `CompassCardConfig`, `RelativeTimeFormat`, `IFunctionCall`, `XPCOM.nsXPCComponents_Results`, `InterceptorOptions`, `StorefrontApiModule`, `MatchReport`, `ConfigSource`, `UnpackOptions`, `SimpleObjectRenderer`, 
`WidgetZoneId`, `SigningRequest`, `Defaults`, `ResponderRecipeResponderRule`, `RadioChangeEvent`, `aws.iam.Role`, `CommandLineBinding`, `DragSourceSpec`, `SelectPlayer`, `TextureDataFloat`, `HttpRes`, `TaskLifecycleEvent`, `TCallback`, `StopWatch`, `ApigatewayMetricChange`, `ODataEntityResource`, `DAL.DEVICE_ID_RADIO`, `ColumnProperty`, `RepoClient`, `DiagnosticBuffer`, `CheckPointObject`, `WesterosCard`, `ClearingHouseUser`, `UnsignedContractDeployOptions`, `comparator`, `QuickPickStep`, `ITimeline`, `Function1`, `IgnoreMatcher`, `ProtocolType`, `TextDocumentPositionParams`, `ArgumentCategory`, `TestProps`, `NewTorrentOptions`, `JPAC`, `OptionsInit`, `RowBox`, `PrinterOptions`, `DirectiveHarvest`, `WorkerMeta`, `Extend`, `RenderDebugInfo`, `CSSResult`, `next.SketchLayer`, `PddlExtensionContext`, `RectGeometry`, `IStorageOperationModel`, `GetDomainStatisticsReportCommandInput`, `SideEntityController`, `HTMLPreviewManager`, `HeatmapSpec`, `UserObjectParam`, `Biquad`, `IOptionsObj`, `SolarWeek`, `SingleProvider`, `TextOptions`, `ValidatorBuilder`, `GestureStateChangeEvent`, `LineSegment`, `DecodedInstruction`, `PointAttribute`, `Logging`, `OverlayPositionBuilder`, `BuildEnv`, `TAG_SIZE`, `EVCb`, `SimpleChanges`, `PayoutNumeratorValue`, `AuthenticationExecutionInfoRepresentation`, `ResourcesWithAttributedChildren`, `QueryAuditorAttributesRequest`, `FollowLinkConfig`, `TernarySearchTree`, `SignDoc`, `BaseTexture`, `RowTransformFunction`, `ObjectFlags`, `DimensionGroup`, `NetworkingState`, `TextElementsRendererOptions`, `MdlOptionComponent`, `SecuritySchemeObject`, `TimechartHeaderProps`, `KeyboardShortcut`, `TweetTextToken`, `WatchableFunctionLogic`, `ReactFragment`, `TypeToMock`, `BespokeServer`, `CustomersGroupState`, `core.ETHSignMessage`, `QueueReceiveMessageResponse`, `AuthReduxState`, `FlipDirection`, `MetricRegistry`, `VNodeProps`, `StateNode`, `ResolvedAtomType`, `Transpiler`, `Preposition`, `DynamoDBDocumentClientResolvedConfig`, `GroupList`, `HTTPResponseBody`, `PoxInfo`, `IDatePickerModifiers`, `VerticalPlacement`, `TaskResolver`, `SoftmaxLayerArgs`, `requests.ListWorkRequestErrorsRequest`, `Feeder`, `OperandType`, `DockerGlobalOptions`, `IApiResponse`, `ContractCallOptions`, `SelectEvent`, `CheckSimple`, `DeleteQueryNode`, `JhiAlertService`, `Phaser`, `_Identifiers`, `BroadcastEvent`, `LegacyResult`, `UsersState`, `UAObject`, `IHWKeyState`, `IVehicle`, `SetElemOverlap`, `ThyUploaderConfig`, `OutputEntry`, `SerializableRecord`, `BatchCreateChannelMembershipCommandInput`, `Rule.RuleFixer`, `ComponentsProps`, `fromSettingsActions.GetSettingModelCollection`, `TruncatablesState`, `TextureCoordinateType`, `TsConfigResolver`, `FragmentDefinitionMap`, `SVGDatum`, `BrickRenderOptions`, `DatabaseParameterSummary`, `MenuAction`, `FunctionAddInfo`, `CreateUserCommandInput`, `InputTree`, `ITransactionProps`, `Alt1`, `ReferenceParams`, `ComponentManager`, `UserActions`, `CompositeName`, `AppiumClient`, `AuthenticateEmailRequest`, `LiteColliderShape`, `LayerSpec`, `PackedTag`, `ParsedFileContent`, `SwaggerMetadata`, `ListResponse`, `ShaderSpec`, `ChangelogEntry`, `MaterialFactory`, `ExternalService`, `FolderView`, `RepoService`, `IListenerRule`, `HallMenus`, `ExpressionKind`, `StatsFieldConfiguration`, `IFeatureOrganizationUpdateInput`, `TAggregateCommit`, `InternalViewRef`, `Specialty`, `AddedKeywordDefinition`, `ApiClientResponse`, `ColumnModelInterface`, `StacksPrivateKey`, `DescribeDBClusterParameterGroupsCommandInput`, `StepperProps`, `SceneActuatorConfigurationCCSet`, `DynamicFormService`, 
`NotificationEntity`, `ContentTypeService`, `GeneratedFiles`, `ICandidateInterview`, `PerformanceEntry`, `ATOM`, `ResourcesToAttributions`, `IPagingTableState`, `SelectorNode`, `AuthContextProps`, `PrivilegeCollection`, `StateAB`, `TranscriptEvent`, `ReadonlyMat4`, `BasePin`, `ArianeeWalletBuilder`, `TimePanelProps`, `WithPLP`, `OptionalFindOptions`, `IPersonState`, `ButtonVariant`, `ExpressionFunctionDefinition`, `HlsEncryption`, `HiNGConfig`, `UiAtomType`, `TagProps`, `PageMetadata`, `WebCryptoDefaultCryptographicMaterialsManager`, `PipelineStageUnit`, `DeleteDataSourceCommandInput`, `DecodedTile`, `ResourceType`, `QueryParamsAsStringListMapCommandInput`, `uint8`, `GX.IndTexWrap`, `TLndConf`, `BoardSettings`, `RouterStateData`, `CategoryPreferences`, `Hmac`, `UpdateResourceCommandInput`, `DocumentInitialProps`, `TestingConfig`, `Realm.Object`, `MdcElementObserverAdapter`, `FileTreeComponent`, `INodeDetailResolverService`, `ShimFactory`, `Scrobble`, `CSSStyleSheet`, `SyncGroup`, `DebugProtocol.ConfigurationDoneArguments`, `vscode.TestRun`, `StickerExtT`, `PrivateDnsZoneGroup`, `InstallTypes`, `RgbTuple`, `GX.AlphaOp`, `RowOutlet`, `ResolverFn`, `AnimationTrackComponent`, `SlopeWallet`, `ECSSystem`, `EntityStore`, `CanvasFillRule`, `ArgOptions`, `EventPriority`, `UserSubscriptions`, `is_pressedI`, `AngularFirestoreDocument`, `ExtendFieldContext`, `TRawConfig`, `MapItem`, `DeleteFolderCommandInput`, `AsyncReaderWriterLockWriter`, `OnCallback`, `GLTF`, `ShapeType`, `TileLoaderState`, `ConfigPlugin`, `ConfigAction`, `DeckPart`, `ColumnType`, `AbstractServiceOptions`, `EngineRanking`, `SemanticTokensBuilder`, `NavSource`, `BattleDetail`, `ASScope`, `CSSValue`, `LogOptions`, `EmbedObj`, `FunctionTypeBuildNode`, `FetchableField`, `LayoutedNode`, `ServiceBinding`, `PluginImport`, `CdsIcon`, `MessageCallback`, `ReportsService`, `Agents`, `RouterProps`, `IGen`, `ControlItem`, `Flight`, `Dispatch`, `RawRestResponse`, `AbsoluteFilePath`, `CreateVolumeCommandInput`, `RectangleProps`, `NoopExtSupportingReactNative`, `PinType`, `IBufferCell`, `DescribeAccountAttributesCommandInput`, `DeleteNotificationsRequest`, `FileResult`, `BillingInfo`, `SelectModel`, `Footer`, `Triggers`, `AccountDevice_VarsEntry`, `TableMap`, `ValuedConfigurationMetadataProperty`, `EvaluationContext`, `AggsStart`, `MultipleLiteralTypeDeclaration`, `InputBoolean`, `StepProps`, `Mockchain`, `nodes.Node`, `DestroyOptions`, `HitInfo`, `AssemblyData`, `IPayload`, `Control`, `Asset`, `RunEvent`, `ProcessorModule`, `ILegacyScopedClusterClient`, `DDSTextureHolder`, `IFieldFormat`, `IRequestMap`, `IUri`, `NonNullable`, `TranslatorType`, `ProxyRule`, `Mapping`, `Dec`, `SphericalHarmonicsL2`, `StepArgs`, `LabelTable`, `SavingsManager`, `SelectionData`, `IDateStatistics`, `ElementProps`, `ConfigurableFocusTrap`, `DisabledTimeConfig`, `ExpandedEntry`, `MockDialog`, `DependencyResolver`, `IFsItem`, `EncodeOutput`, `ProjectReference`, `CartoonConfig`, `BlockBlobGetBlockListResponse`, `XanimePlayer`, `InteractionWaitingData`, `MutationRequest`, `ErrorFormatter`, `ComponentCtor`, `CrossBridgeAsset`, `TrackMapOptions`, `monaco.Range`, `KeyIcon`, `AndroidTarget`, `IJobConfig`, `ClassNameType`, `AnimationTrack`, `ResponsiveStorage`, `Channel`, `RowMap`, `FunctionImpl`, `RpcResponse`, `ServerWalletAPI`, `ApplicationQuestion`, `ScheduleType`, `StackResultsMatcher`, `JournalShowQueryParams`, `JSDocType`, `OperationGroupDetails`, `MapDispatchProps`, `PotentialPartnersState`, `InjectionToken`, `IServerError`, `ListRecommendationsCommandInput`, `BadGuy`, 
`IOptions`, `LitecoinjsKeyPair`, `SuiteNode`, `ExpressionAttributeValues`, `InterfaceTypeDefinitionNode`, `AutocompleteSettings`, `SendDataRequest`, `StrokeDrawingContext`, `Where`, `ITrack`, `IOutputOptions`, `Mocha.Suite`, `AppserviceMock`, `LoaderOptions`, `ObjectConsumer`, `Connex.Driver`, `LazyQueryHookOptions`, `GlobbyOptions`, `SVGAttributes`, `PortalProps`, `DragState`, `ModuleInstance`, `TranslateConfig`, `VueDecorator`, `ContextMenuService`, `ee.Emitter`, `JestProcess`, `IContainerRuntimeMetadata`, `ChannelCardType`, `CapabilitiesService`, `SavedObjectsMigrationConfigType`, `SendParams`, `InitCmdContext`, `WebGLVertexArrayObjectOES`, `GetAuthorizersCommandInput`, `IProtoTask`, `QueuingStrategy`, `EmissionsController`, `SuperAgentRequest`, `PackageResult`, `AlertDialog`, `ReferencesIdDictionary`, `IBasePath`, `ErrorCallback`, `Person_Employment`, `ViewPort`, `IMark`, `_IObjectMap`, `XcomponentClass`, `Images`, `StringValueNode`, `AttributeDerivatives`, `Queue`, `HsLanguageService`, `StoreConfiguration`, `Coda`, `AddressDTO`, `ViewBaseDefinition`, `ErrorHandler`, `GeneratorOptions`, `ListMembersCommandInput`, `DataRowItem`, `ProtocolResponse`, `Nuxt`, `LevelsActionTypes`, `NzAutocompleteOptionComponent`, `GetDatabaseCommandInput`, `FetchHandlers`, `URLLoader`, `GetRowLevelKeyFn`, `IEndpoint`, `NodeUnit`, `CheckStatus`, `MountedBScrollHTMLElement`, `TodoAppDriver`, `d.MatchScreenshotOptions`, `_1.Operator`, `Awaited`, `ResolvedTypeReferenceDirective`, `IDate`, `DiscussionEntity`, `TestEnv`, `ListAlertsCommandInput`, `PersonaId`, `CustomParameterGroup`, `XLSX.WorkSheet`, `ImportFacebookFriendsRequest`, `HLTVPageElement`, `Express.Multer.File`, `LambdaExpr`, `EditorContext`, `CollisionBox`, `Cone`, `MsgSignProviderAttributes`, `LinesResult`, `CompoundFixture`, `CategoryModel`, `ITask`, `IModelDecoration`, `ParameterChange`, `ChartScene`, `ColRef`, `FormulaOptions`, `ValueDescPair`, `ProjectsState`, `MatchPrefixResult`, `ROPCService`, `DialData`, `DaffCategoryFilterEqualRequest`, `SelectValue`, `HierarchicalItem`, `RpcRequest`, `ItiririAsync`, `OnConflictNode`, `Stripe.Event`, `PatternValueNode`, `SimpleAnalyzer`, `ActionBarProps`, `TextDrawer`, `HierarchyRectangularNode`, `InputComponent`, `ButtonOptions`, `IBackoffStrategy`, `SelectedIndexChangedEventData`, `SignInState`, `ImmutableBucket`, `ExecutionContract`, `SubscriptionResult`, `RemoteMessage`, `SymbolFormatFlags`, `SelectQuery`, `SolidityVisitor`, `Tools`, `RichEmbed`, `FacetFaceData`, `StaticSiteZipDeploymentARMResource`, `CompilerEventFileUpdate`, `GroupEventType`, `I18nUpdateOpCodes`, `FunctionMethods`, `ANDGate`, `FieldDescriptor`, `Type_Which`, `RepoConfig`, `MatchFunction`, `CalcFun`, `ShurikenParticleRenderer`, `SortCriteria`, `QRCodeSharedData`, `IPullRequestListItem`, `GherkinDocumentHandlers`, `matrix.MatrixArray`, `GfxRenderPass`, `IRGB`, `SanityDocument`, `requests.ListMfaTotpDevicesRequest`, `BasicInfo`, `ClonePanelAction`, `SimpleSignedTransferAppState`, `ScopedMemento`, `KeysRequest`, `StickyVirtualizedListState`, `ProjectStore`, `RangeAsyncIterable`, `IndexedAccessTypeNode`, `pointerState`, `d.ComponentRuntimeHostListener`, `Alternative`, `InputGenerator`, `HitTesters`, `ExtendedWebSocket`, `UseDropdown`, `dRes_control_c`, `PublisherDoc`, `TabContentItem`, `ExtendedVue`, `AttachmentData`, `CodePddlWorkspace`, `ResourceGroup`, `Real_ulong_numberContext`, `ProductFilterDTO`, `PerfState`, `SocketIOClient.Socket`, `TextEncoder`, `MockDataGenerator`, `GenerateConfig`, `HydrateFactoryOptions`, `TypeScriptService`, 
`Appointment`, `TransactionClientContract`, `UpdateConnectionCommandInput`, `IDict`, `States`, `QueryAccess`, `ParameterType`, `VariableGroupData`, `TreeGridAxis`, `React.Navigator`, `StyleCompiler`, `ListFlowsCommandInput`, `Availability`, `Undo`, `ProjectTilemap`, `TestParams`, `GameObjectInfo`, `DesignerTypeOption`, `ViewableRobot`, `AnnotationEventEmitter`, `StylusNode`, `ICompactPdfTextObj`, `IMatchResult`, `ScrollLogicalPosition`, `DateKey`, `IRoundResult`, `TResolver`, `Int128`, `RuleIteratorWithScope`, `BanGroupUsersRequest`, `NewPackagePolicyInput`, `Reminder`, `CacheNode`, `TrackData`, `Expected`, `GLTF.AccessorComponentType`, `ListRulesResponse`, `PuppetClassInfo`, `OptionsOrGroups`, `IHashProvider`, `ToastOptions`, `DataRecordValue`, `SGSCachedData`, `MappingTreeArray`, `AlexaLambda`, `ConcreteBatch`, `IAnnotation`, `PLSQLCursorInfosVSC`, `AlertComponent`, `EgressSecurityRule`, `VoiceAssistant`, `DeserializerContext`, `common.ClientConfiguration`, `RemoteHandler`, `ParseElement`, `JSONDiff`, `BulkInviteCommand`, `HandlebarsTemplate`, `TaskTiming`, `Deferrable`, `ItemInterface`, `RenderPlugins`, `T14`, `BasicProfile`, `NamedMouseElement`, `LinkTextLocator`, `apid.ManualReserveOption`, `PropertyAssignments`, `Sound`, `CreateMeetingWithAttendeesCommandInput`, `ISolution`, `ShuffleIterator`, `AudioSource`, `JsonAst`, `GameMarks`, `IKeyValue`, `ReflectedValueType`, `NodeDef`, `ForecastSeriesContext`, `CallHierarchyDeclaration`, `Introspector`, `tStartupOrShutdown`, `Sign`, `Highcharts.RangeSelectorButtonsOptions`, `AkimaCurve3d`, `EvaluatorOptions`, `IMediatorConfigurator`, `VirtualRows`, `VideoStreamIdSet`, `DurationEvent`, `DefaultClientMetricReport`, `PlyAdapter`, `PanResponderGestureState`, `ColumnReference`, `PgAttribute`, `MeshInfo`, `SerializeOptions`, `ContactSubscription`, `Demo`, `Review`, `WebGLContextWrapper`, `fabric.Object`, `ComponentTreeNode`, `RoleData`, `TestCaseInfo`, `StringOrNumberOrDate`, `webpack.Compiler`, `ThemeManager`, `EntryId`, `WorkerArgs`, `CodeActionParams`, `TokenAccount`, `ReadableData`, `ElementAspectProps`, `Segment1d`, `VariableModel`, `SigningMethod`, `APIRequest`, `requests.ListBucketsRequest`, `requests.ListCrossConnectGroupsRequest`, `VocabularyOptions`, `TestResultContainer`, `interfaces.Unbind`, `EventKeys`, `Progress.INeonNotification`, `messages.Scenario`, `RequestEvent`, `IGhcMod`, `UIError`, `ThemeExtended`, `Fig.Option`, `ResponseInterface`, `Urbit`, `IFeatureSet`, `ScheduleDoc`, `ThemeModeEnum`, `AthenaRequest`, `DisplayMarker`, `TestDataSource`, `MarkerInfoNode`, `CW20Instance`, `ICosmosTransaction`, `AnchorMode.Any`, `Droppable`, `SerializedObject`, `Behavior`, `LangOptions`, `DiffOptions`, `LightInstance_t`, `FactoryArgs`, `ParsedQRL`, `DirectoryInode`, `OnCleanup`, `Point3dArrayCarrier`, `ColumnDef`, `MDCListIndex`, `Browser.Interface`, `UtxoInfoWithSats`, `PureTransition`, `ProcessRequirement`, `Sigma`, `ListTargetsForPolicyCommandInput`, `PrayerTimes`, `VObject`, `SeekRange`, `RenderFlags`, `GetModelCommandInput`, `MockOtokenInstance`, `BaseTranslatorService`, `ICarsRepository`, `StateInstance`, `TS`, `RuleTransition`, `TracePrinter`, `HealthpointLocationsResult`, `NavigationEdge`, `TileMapLayerPub`, `TSQuerySelectorNode`, `App.webRequest.IRequestProcessor`, `bigint`, `ComparisonOperator`, `SummaryST`, `KeyInDocument`, `GX.TexPalette`, `ListFunctionsCommandInput`, `BaseMultisigData`, `ListCertificatesRequest`, `Http3PriorityFrame`, `BackupJSONFileLatest`, `CreateService`, `DiscordStore`, `WordCloudSettings`, `DAL.KEY_X`, 
`ECSqlValue`, `FileInode`, `RefundPayerStore`, `MemoOptions`, `AppModule`, `GlobalEnv`, `AssetReference`, `CLM.EntityBase`, `XmlParserNode`, `ConsumedCapacity`, `DescribeSchemaCommandInput`, `IGetProjectsStatistics`, `IValidationContext`, `ProxyReducer`, `mongoVisitor`, `VisibleTreeNodes`, `ClusterCollection`, `SessionsConfigSchema`, `TutorialModuleNoticeComponent`, `FileFilter`, `RouterSource`, `GenericIdModel`, `DynamoDbFileChange`, `DangerInlineResults`, `GreetingService`, `Frakt`, `CredValues`, `StoreAction`, `InjectedConnector`, `CachedNpmInfoClient`, `TypedTensor`, `CoordinateType`, `AdalService`, `GetPerspectiveOptions`, `t_6ca64060`, `Viewer.Viewer`, `ROLES`, `TreeBudgetEvent`, `ResponserFunction`, `MockData`, `LogAnalyticsCategory`, `FS`, `CompBitsValue`, `DropPosition`, `ZxBeeper`, `TaskExecutionSchema`, `VideoStreamOptions`, `ChannelJoin`, `DrawBufferType`, `DiagnosticSeverityOverrides`, `IUrlResolver`, `PadData`, `ShorthandPropertyAssignment`, `PokemonService`, `ValueReflector`, `SubscriptionAlreadyExistFault`, `AuthProviderProps`, `MathsProcessor`, `IterationStatement`, `Reshape`, `ModuleModel`, `MaybeTypeIdentity`, `TaskRun`, `ReifiedType`, `CustomFeatureConfig`, `LoadEventData`, `OverlayContainer`, `addedNodeMutation`, `CliHttpClientOptions`, `MediaStreamsImpl`, `RequestsService`, `KamiConfig`, `DNSAddress`, `UpdaterService`, `BotTags`, `ElementMaker`, `RenderingOptions`, `NotifyService`, `SearchQueryCtx`, `Json.ParseResult`, `InstancePrincipalsAuthenticationDetailsProviderBuilder`, `DOMQuery`, `GluegunToolbox`, `IBoundingBox`, `CacheOptions`, `PageDensity`, `eventType`, `FunctionalLayout`, `PluginValidateFn`, `NewBlock`, `ActorRenderModeEnum`, `DurationLike`, `TLBounds`, `IPositionComponent`, `UpdateUserRequest`, `PS`, `WalkNode`, `DeleteAccountsValidationResult`, `RuntimeIndex`, `LongOptionName`, `angular.IQService`, `CloningRepository`, `CurveCollection`, `IComparatorFunction`, `ParamSpecEntry`, `next.AppLayer`, `RTCConfiguration`, `BitbucketPipelines`, `ResponseComment`, `DemographicCounts`, `CameraState`, `PanelPackage`, `ProductControlSandbox`, `Http3Header`, `ProtocolRequestType`, `DeleteTemplateCommandInput`, `GameObject`, `Md.List`, `d.StyleCompiler`, `KeychainCredential`, `SubnetMapping`, `CreateMediaDto`, `Decibels`, `TransferRequest`, `ImportsAnalyzerResult`, `VdmComplexType`, `OpenSearchDashboardsSocket`, `Img`, `PostService`, `FileRange`, `BinarySwitchCCSet`, `CtrNot`, `ProviderConfig`, `StopDBClusterCommandInput`, `TextEditorViewColumnChangeEvent`, `FrameManager`, `ServerMap`, `GitHubLocation`, `AppStoreReplay`, `TableInterface`, `StatusBarAlignment`, `AlgBuilder`, `AsyncFactory`, `OhbugUser`, `Codeword`, `PDFHeader`, `MediaService`, `TreeNodeInfo`, `NotificationsServiceStub`, `Shadows`, `MythicAction`, `IFeatureFlag`, `EngineOptions`, `TestFile`, `SPNode`, `OBS`, `CeloTokenType`, `UUIDType`, `AddApplicationOutputCommandInput`, `LocalStorage`, `Events`, `parse5.DefaultTreeElement`, `IRemoteTargetJson`, `Indentation`, `SearchActions`, `GaugeSettings`, `SchemaMap`, `IDBOperator`, `SysTask`, `UserLogin`, `ITodo`, `requests.GetProjectRequest`, `AxiosError`, `HookProps`, `IOptionSelectText`, `TranslationState`, `Calendar`, `HTMLHeadElement`, `ObjectPage`, `OutboundTransport`, `ErrorResponse`, `CommBroker`, `CommonIdentity`, `GeometryCollection`, `requests.ListAutonomousVmClustersRequest`, `MerchantEntity`, `SandDance.types.Column`, `EditableTextStyle`, `EventPayload`, `PagedParamsInput`, `WebGLTimingInfo`, `ThemeValueResolver`, `HashChangeEvent`, `InlineConfig`, 
`types.Message`, `BenchmarkResult`, `ProblemModel`, `DetailsProps`, `ConnectionMetrics`, `IRouteTable`, `DashLowerthirdNameInputElement`, `TaskExitedEvent`, `MDCBaseTextField`, `IStatus`, `TestServerHost`, `CustomRenderer`, `WorkRequestLog`, `Artwork`, `NavigationExtras`, `ts.ConciseBody`, `serialization.ConfigDict`, `ActiveOverlay`, `Episode`, `OperationContext`, `cp.ForkOptions`, `DescribeReportDefinitionsCommandInput`, `Portal`, `RecurringBillPeriod`, `FieldTypeMetadata`, `RendererService`, `IUserModel`, `UpdateEnvironmentCommandInput`, `IHost`, `ts.TypeNode`, `PrivateIdentifierInfo`, `CreateMessageDto`, `SendTable`, `CallMethodRequestLike`, `d.PlatformPath`, `Deep`, `NVMJSONNodeWithInfo`, `IProfileLoaded`, `AzExtParentTreeItem`, `Private.PaintRegion`, `PushRequest`, `WebsocketProvider`, `ConvertIdentifier`, `NormalModule`, `CodeEditorMode`, `LockState`, `CfnExpressionResolver`, `TypeList`, `IPQueueState`, `RegExpMatchArray`, `Surface`, `SystemMessage`, `MetadataORM`, `IMessageRepository`, `PaletteOptions`, `FilterStatusValues`, `IHandleProps`, `IDBEndpoint`, `ACrudService`, `ViewInfo`, `MyComponent`, `LineSide`, `requests.ListWorkspacesRequest`, `StructuredType`, `EmojiService`, `RecommendationCount`, `CallExpressionArgument`, `Generatable`, `CspConfig`, `Features`, `TimeHistoryContract`, `FaceletCubeT`, `CipherResponse`, `IInputHandler`, `AppClientConfig`, `SwitchFunctorEventListener`, `IEqualityComparer`, `IterableActivity`, `TransportSession`, `ExpressionResult`, `WebSiteManagementModels.SiteConfig`, `Documentation`, `d.ComponentCompilerListener`, `IdMap`, `ToolItemDef`, `React.Route`, `ExpectedNode`, `ExpressionFunctionParameter`, `P9`, `Script`, `MockEventListener`, `VideoTexture`, `MagicString`, `LexoInteger`, `ClientRenderOptions`, `EPickerCols`, `cdk.StackProps`, `StoreConstructor`, `ReactClientOptionsWithDefaults`, `WebProvider`, `DataStateClass`, `CLR0`, `Graphql`, `SegmentEvent`, `IGameUnit`, `TableImpl`, `WebGLShaderPrecisionFormat`, `AuthHeaders`, `UpdateGatewayInformationCommandInput`, `CreateApplicationRequest`, `TaskProvider`, `DataProps`, `RestPositionsResponse`, `DeviceMetadata`, `HsDrawService`, `Vec3`, `BuildMiddleware`, `IDocumentManager`, `AppContextType`, `IChip`, `Mirror`, `RouteNode`, `AccountFacebook`, `WebpackConfiguration`, `Monad2C`, `HeroAction`, `UploadxService`, `TouchData`, `FormControlConfig`, `SlicedExecution`, `IBlockchainsState`, `StrictEventEmitter`, `PartitionBackupInfo`, `TextRangeCollection`, `BranchSummary`, `ITaskData`, `ProgramInfo`, `Registers`, `DOMPoint`, `CodeActionCommand`, `IMidwayBaseApplication`, `IMidwayBootstrapOptions`, `OHLCPoint`, `LogInfo`, `IStringDictionary`, `ArtifactItemStore`, `BufferData`, `OnEffectFunction`, `Rental`, `GfxAttachmentState`, `AjaxResponse`, `DeployResult`, `PublicAccessBlockConfiguration`, `LeanDocument`, `IOperatorIdentifier`, `InterfaceWithValues`, `NodeJS.WritableStream`, `NodeSet`, `MDCShapeScheme`, `StatusContext`, `Tax`, `GlobalsSearch`, `Post`, `Keyword`, `IEquipment`, `IQueryState`, `MDCLineRippleAdapter`, `AdminGameEntity`, `CustomSeriesRenderItemAPI`, `IOidcOptions`, `Simplify`, `IColumnRelationMetadata`, `EntityCollectionReducer`, `Vue`, `SwitcherState`, `SubjectInfo`, `SubscriptionClass`, `IMiddlewareGenerator`, `QCBacktest`, `cc.Prefab`, `ReadOptions`, `ICharAtlasConfig`, `GpuInformation`, `TagsFilter`, `IKeyQuery`, `CsvGenerator`, `Empty`, `ComponentInterface`, `UserView`, `ReactTestRenderer`, `GetDatabasesCommandInput`, `CurrencyToValue`, `GLclampf3`, `ExtractedCodeBlock`, `CanvasSpace`, 
`TNSCanvas`, `BroadcasterService`, `RobotApiRequestOptions`, `FrameStats`, `ValProp`, `CreateDirectoryCommandInput`, `GLRenderingDevice`, `FlowListener`, `RedisModuleOptions`, `FragmentedHandshake`, `ProcessEvent`, `GeoPolygonFilter`, `ZeroPadding2DLayerArgs`, `TestScriptOptions`, `Pkcs12ReadResult`, `ProductState`, `SpeakerWithTags`, `AnnotatedFunctionInput`, `PointItem`, `sdk.RecognitionEventArgs`, `StandardTask`, `AbsolutePath`, `IDataViewOptions`, `WebGLRenderbuffer`, `PDFDocument`, `Phaser.Geom.Point`, `InvestDateSnapshot`, `ContentReader`, `UiMetricService`, `QueryOutput`, `P6`, `EthereumNetwork`, `RopeBase`, `MongoRepository`, `GaussianDropoutArgs`, `Reconciliation`, `requests.ListInstancePoolInstancesRequest`, `GetUsageStatisticsCommandInput`, `VertexLayout`, `SubscriptionEnvelope`, `GherkinDocumentWalker`, `IRun`, `ApiPromise`, `BehaviorNode`, `CompletedLocalIpcOptions`, `CompositeBatch`, `TextEditAction`, `SearchFacetOperatorType`, `CompileTarget`, `ListenCallback`, `Assertions`, `TRPCError`, `StepDefineExposedState`, `VariableNames`, `BaseType`, `Declarator`, `ModdedBattleScriptsData`, `JQueryEventObject`, `ScreenshotService`, `ObjectKeyMap`, `NodeJS.Timeout`, `CommittersDetails`, `TemplateCompiler`, `CronConfig`, `IOctreeObject`, `GADRequest`, `RecursiveStruct`, `KubeConfiguration`, `EncryptionContext`, `ParsedCommandLine`, `CapnpVersion`, `SingleSigSpendingConditionOpts`, `InternalServiceErrorException`, `RouteOptions`, `TestPlan`, `Fragment`, `IFileSystemCreateLinkOptions`, `DatabaseFeatureOptions`, `ToolbarDropdownButtonProps`, `QueryResponse`, `FileHandler`, `OffsetRange`, `VirtualMachineRunCommandUpdate`, `DescribeInstanceAttributeCommandInput`, `HistoryInstructionInfo`, `TutorialDirectoryNoticeComponent`, `ComponentHolder`, `IStepInfo`, `ɵAngularFireSchedulers`, `TSFunDef`, `ThyAbstractOverlayRef`, `InputMap`, `CanvasFontWeights`, `CheckerType`, `DbCommand`, `CombatGameState`, `IViewRegionsVisitor`, `EthersBigNumber`, `UpgradeConfigsParams`, `messages.SourceMediaType`, `TileAttrs`, `BckCtrlData`, `AppAction`, `Anchor`, `GameId`, `SiteService`, `BaseArtifactProvider`, `TabularRows`, `PromiseRejectedResult`, `UserCreateInput`, `IViewbox`, `AuthenticationResult`, `SimpleGit`, `Traversable`, `esbuild.OnLoadArgs`, `WsClient`, `ImageAssetService`, `DmarcState`, `LegacyDateFormat`, `NamedArgTypeBuildNode`, `PropertyDataChangeEvent`, `MethodsOrOptions`, `DeployedBasset`, `ts.ExpressionStatement`, `SdkSubscribeAckFrame`, `ActivitySourceDataModel`, `GetResourcePolicyCommandInput`, `IResolvedUrl`, `xmlModule.ParserEvent`, `IFileSystem`, `SimpleWallet`, `IPathsObject`, `StockState`, `ScreenSize`, `FormControlName`, `SimNet`, `ChartRef`, `requests.ListNetworkSecurityGroupVnicsRequest`, `ResolvedId`, `NetworkInterfaceInfo`, `TextOffset`, `CookieAttributes`, `IPCMessage`, `SceneView`, `FastifyAdapter`, `http.Server`, `TempFlags`, `GadgetPropertyService`, `Painter`, `SplitField`, `JIterator`, `WorkerMessage`, `MigrationContext`, `FreeCamera`, `TaskConfig`, `Telegraf`, `SKU`, `WalletGroupTreeItem`, `IItemTree`, `OCSpan`, `Apollo.MutationHookOptions`, `Eyes`, `K.IdentifierKind`, `MockDirective`, `MockProvider`, `TJS.Definition`, `IModalState`, `DeleteDBClusterCommandInput`, `TableRowPosition`, `IQuiz`, `requests.ListOdaInstancesRequest`, `IAppOption`, `PkGetter`, `ConnectedProps`, `monaco.Position`, `U8Node`, `IndicatorProps`, `ExecutionPlanImpl`, `StepBinary`, `PluginType`, `QuickAlgoLibrary`, `BackstageItem`, `MonthYearDate`, `VideoFrameProcessorPipelineObserver`, `SubmitProfile`, 
`ExtraDataModel`, `PBRMaterial`, `RequestInfoUtilities`, `IFlowItemComponent`, `LogTracker`, `OverrideCommandOptions`, `AttributeTableData`, `IDynamicPortfolioColumnConfig`, `PossiblyAsyncIterable`, `XListNode`, `ColumnIndexMap`, `QueryResults`, `Ec2MetricChange`, `InstanceWrapper`, `StackLine`, `RtorrentTorrent`, `DirectiveBinding`, `DescribeFleetAttributesCommandInput`, `Fixed18`, `PositionType`, `Blending`, `IRenderContext`, `AllowedParameterValue`, `IBlockData`, `OAuth2Client`, `OperationObject`, `SVGUseElement`, `CreateSubscriptionRequest`, `SystemErrorRetryPolicy`, `NoteSequence`, `Traced`, `CompatibleDate`, `TypeUtil`, `ICustomer`, `MarkerRange`, `UsersAction`, `Journal`, `TestService`, `PerformanceStatistics`, `DeleteDBInstanceCommandInput`, `Union2`, `ProjectsStore`, `InstantiationNode`, `CssToEsmImportData`, `SFDefaults`, `AppContainer`, `MulticallRequest`, `IMatchableOrder`, `PartitionConfig`, `Svg`, `UIApplication`, `ResolverRule`, `GlTfId`, `Transcript`, `BumpInfo`, `TaskSchedule`, `EnumDescriptorProto`, `S2DataConfig`, `React.ComponentPropsWithoutRef`, `GfxRenderInstList`, `MutableVideoPreferences`, `DescribeSecurityProfileCommandInput`, `Logo`, `Telemetry.TelemetryEvent`, `MeetingCompositePage`, `ListAction`, `ChannelTokenContract`, `Expansion`, `DirectionMode`, `LQueries`, `AuthCredentials`, `GenerateResponse`, `IChamber`, `IdentifierContext`, `PreviousSpeaker`, `SessionType`, `TerraformStack`, `NSURL`, `SpecList`, `CandleStick`, `StageSwitchCtrl`, `QlogWrapper`, `ApplicationTokenCredentials`, `AppointmentId`, `EpicSignature`, `TypeOf`, `IDocument`, `App.services.IHttpChannelService`, `TokenConfig`, `tslint.RuleFailure`, `FileSystemHelper`, `SettingsRootState`, `MdcTabScrollerAlignment`, `DirectoryUpdate`, `CustomAtom`, `CollisionInfo`, `AbstractUserProxy`, `TransformPluginContext`, `TabNavigationBase`, `ScriptTask`, `PointComposition`, `FromSchema`, `IAmazonLoadBalancer`, `Values.ReadyValue`, `IAnimationKey`, `FormControlState`, `NgxFileDropEntry`, `ListAssociatedResourcesCommandInput`, `PlayerState`, `TaskComponentState`, `StaticService`, `AccessTokens`, `PiEditConcept`, `TestComponent`, `PrintLabel`, `NlsBundle`, `ParsedTag`, `IApplicableSchema`, `SchemaEntry`, `CacheFileList`, `UberPBRMaterial`, `TranspileOptions`, `FuzzyLocale`, `ObjectContainerParams`, `NumberAttribute`, `ApolloQueryElement`, `IRelease`, `ServerHello`, `InputElement`, `FormInternal`, `FirebaseFirestore.Query`, `ErrorConstructor`, `ObjectListResult`, `JwtConfigService`, `Execution`, `AssociatedName`, `TRequest`, `ThyAutocompleteRef`, `CommandMetadata`, `ActivatedRoute`, `MultiStats`, `BaseTypes`, `IClientInteraction`, `GLM.IArray`, `GetUserInfoQuery`, `IAsfObjectHeader`, `JSONSchemaStore`, `ListChannelBansCommandInput`, `StepFunctions`, `Algebra.TripleObject`, `Ecs`, `CSSSnippet`, `HoldSettings`, `Ethereum`, `StringLiteralNode`, `PositionGrid`, `MDCRippleFactory`, `Jsonified`, `_app`, `IAuthFormContext`, `QueryObserverResult`, `BlogEntry`, `CompositeCollider`, `TypeVblDecl`, `LSConfigManager`, `AbortSignal`, `GlobalMaxPooling1D`, `FileParseResult`, `ClearCollections`, `FeederData`, `InterfaceWithConstructSignatureOverload`, `OperationCallbackArg`, `PutFileOptions`, `TreeSelectOption`, `ISqlite.SqlType`, `InitializeServiceCommandInput`, `FurParam`, `Either`, `Macro`, `ListAnswersCommandInput`, `ElementEvent`, `StaticSiteUserProvidedFunctionAppARMResource`, `Bug`, `MarkInterface`, `ThyDragOverEvent`, `UhkDeviceProduct`, `ApiController`, `RawSourceMap`, `DriftConfig`, `TradeFetchAnalyzeEntry`, 
`CategorizedClassDoc`, `ThyPopoverContainerComponent`, `RequestResponder`, `VersionVector`, `Eq`, `NodeDocument`, `LocalStorageAppender`, `ListMenu`, `messages.Duration`, `DanmakuDrawer`, `GraphQLObjectType`, `SourceOffset`, `ContainerDefinition`, `PrismaClientConstructor`, `TVShow`, `LicensingPlugin`, `StoredOrder`, `IBlockOverview`, `InMemoryEditor`, `ChartProps`, `UserRecord`, `CustomUser`, `ParseState`, `debug.Debugger`, `OpenSearchdslExpressionFunctionDefinition`, `ListRegistriesCommandInput`, `Types.CodeGenerator.CustomGenerator`, `FirstMate.Grammar`, `MockCSSStyleSheet`, `IValidationOptions`, `SeriesDataType`, `IContentVisitor`, `IGlobal`, `IStateGlobal`, `Workshop`, `TKey1`, `BuilderCanvasData`, `MessageService`, `ESLCarouselSlide`, `Refable`, `Simulation3D`, `FolderUpload`, `TransportRequestOptionsWithOutMeta`, `TableSchemaDescriptor`, `ElementQueryModifier`, `PDFDocumentProxy`, `TypeFlags`, `NetworkVersion`, `BackendConfig`, `ODataResource`, `SHA3`, `CRS`, `FeatureDescriptor`, `TeamDocument`, `GLTFFileLoader`, `BufferVisitor`, `AccessTokenRequest`, `ProxyRulesSubscription`, `ValidatePurchaseResponse`, `CommandItemDef`, `UpdateAction`, `CloudFormationClient`, `LifecycleState`, `RowInfo`, `SvgToFontOptions`, `MIRVirtualMethodKey`, `EmitterContext`, `IUiAction`, `IDeployment`, `PagingOptions`, `IDBTransaction`, `DataViewField`, `FeatureStabilityRule`, `DocumentedType`, `TransactionDescription`, `AMM`, `RenderQueue`, `ProofService`, `SummaryArticle`, `TopicsMap`, `AccountWithAll`, `REPL`, `SwiperProps`, `XmlSerializerOptions`, `ICacheConfig`, `TranslationUnit`, `WrappedEntity`, `ModelType`, `Markup`, `ServerType`, `PermissionTree`, `StreamEmbed`, `AlertServicesMock`, `DevicePixelRatioObserver`, `TooltipValueFormatter`, `SQSEvent`, `IKeyboardEvent`, `ServiceWorkerGlobalScope`, `TDiscord.TextChannel`, `Sudo`, `ComponentInternalInstance`, `SelectOptionComponent`, `TiledTMXResource`, `ParameterObject`, `GenesisConfig`, `ActiveErrorMessage`, `OnPostAuthHandler`, `ForgotPassword`, `PopupProps`, `RuntimeMappings`, `ForwardRefRenderFunction`, `ITimeOff`, `ArrayOperation`, `GetFunctionCommandInput`, `GetBucketPolicyCommandInput`, `DomainEventSubscriber`, `InputLink`, `DataColumnDef`, `SceneGraphNodeInternal`, `Position3DObject`, `ProgramInput`, `PayableTx`, `Entrypoint`, `DependencyItem`, `UserAccount`, `Basset`, `SubtleButton`, `DeserializeAggConfigParams`, `HTMLDOMElement`, `Highcharts.AnnotationsOptions`, `ResolvedUrl`, `RequestInfo`, `AbstractParser`, `KeyboardListenerAPI`, `PageData`, `PredictablePickleTestStep`, `GLbitfield`, `MatchersObject`, `multiPropDiff`, `MediaQuery`, `NormalizedScalarsMap`, `TlcCode`, `ComponentCompilerState`, `requests.ListSecurityAssessmentsRequest`, `TagsObject`, `SEGroup`, `Bits`, `ScreenElement`, `InsightLogicProps`, `INameDomainObject`, `ObjectBinding`, `KeyState`, `PublicShare`, `provider`, `DefinitionInfoAndBoundSpan`, `Rational`, `WorkflowHooks`, `NumberDataType`, `ExportNamedDeclaration`, `BreakpointKey`, `VpcSecurityGroupMembership`, `ConsensusContext`, `ISceneDataArray`, `ExpressionFunctionOpenSearchDashboards`, `IAppEnvVar`, `DateOrDateRangeType`, `StyledComponentWithRef`, `VirtualDevice`, `MinecraftVersion`, `CommandType`, `StyleUtils`, `OnSuccess`, `PartyName`, `FooState`, `FsApi`, `Modification`, `ICData`, `ChangeNode`, `IPoint`, `UseSocketResponse`, `DomainEventClass`, `MessageEmitter`, `HTMLTitleElement`, `requests.ListSoftwareSourcesRequest`, `MFARequest`, `DrawingId`, `polymer.Base`, `EvmAccount`, `SocketIO`, `cheerio.Element`, 
`FirestoreForm`, `ChartUsage`, `CollectionFn`, `fromReviewerStatisticsActions.GetReviewerStatisticsResponse`, `RichTextProps`, `AxisType`, `FrameContainer`, `FaasKitHandler`, `CloseEvent`, `FiniteEnumerableOrArrayLike`, `Package.ResolvedFile`, `MessageSignature`, `MessagingService`, `MinecraftVersionBaseInfo`, `TestResult`, `IPCMessagePackage`, `HotObservable`, `server.Position`, `DictionaryFileType`, `MonitorModel`, `MutationConfig`, `Mocker`, `ArgumentsHost`, `ProcessedTransaction`, `ViewerConfiguration`, `AssertStatic`, `ParsedArgv`, `FavoriteTreeItem`, `ConnectionNode`, `CreateEnvironmentCommandInput`, `SubscriptionOption`, `AnnotationTooltipState`, `ResourceObject`, `SwitchCase`, `InlineFieldDescriptor`, `ActiveSelection`, `ResolvedDependency`, `DisplayNode`, `ApplicationTargetGroup`, `RequestFn`, `DemoFunction`, `Formats`, `LogHook`, `HttpApi`, `QueryAllParams`, `requests.ListZonesRequest`, `CommandLineConfiguration`, `CreateUserRequest`, `Bidirectional`, `GeoUnitDefinition`, `VehicleInfo`, `InstalledClock`, `ConstantJsExpr`, `CLR0_ColorData`, `DecimalArg`, `VgAPI`, `NodeFactory`, `LogMessage`, `ReferenceDescription`, `LogParse`, `DescribeEventSubscriptionsMessage`, `PickRequired`, `Events.exitviewport`, `SafeAreaProps`, `RendererType`, `RawSavedDashboardPanelTo60`, `TypeaheadState`, `SelectorType`, `IConnectionFactory`, `RedactChannelMessageCommandInput`, `ChangelogJson`, `TimelineTotalStats`, `UserDataService`, `SvgTag`, `PluginRevertAction`, `RequestListener`, `Glyph`, `ScannedFeature`, `GfxPass`, `CoreState`, `Proc`, `UpdateExceptionListItemSchema`, `CreateJobTemplateCommandInput`, `RuleMeta`, `MediaQueries`, `Term`, `_HttpClient`, `tf.fused.Activation`, `RuntimeContext`, `DetectedLanguage`, `DaffCategoryFactory`, `Delaunay`, `Matches`, `ICommandHandler`, `MaybeDate`, `IndexUUID`, `requests.ListChannelsRequest`, `TickViewModel`, `SelectorDatastoreService`, `CanvasIcon`, `PublicIdentifier`, `Moniker`, `ComboBoxMenuItemGroup`, `LanguageServiceDefaults`, `ResolverProvider`, `DependencyPins`, `LogAnalyticsSourceFunction`, `CurrentAccountService`, `BreadcrumbContextOptions`, `IPartitions`, `FormCookbookSample`, `IMenuItemProps`, `ExportMap`, `CartEntity`, `Simple`, `StringOptions`, `requests.ListClustersRequest`, `ComplexType`, `CohortType`, `StateUpdater`, `GitOutput`, `DocType`, `AggsState`, `JGOFNumericPlayerColor`, `WirePayload`, `SignedMultiSigContractCallOptions`, `IConsumer`, `DataRange`, `tensorflow.IFunctionDef`, `ResolveFn`, `NamedExports`, `Pane`, `LESSParser`, `WiiSportsRenderer`, `PhysicalKey`, `HSD_TEInput`, `IRect`, `IDimension`, `DirtyStyle`, `ArrayType2D`, `HeritageClause`, `NativeFunction`, `SSOLoginOptions`, `PortalPoller`, `PDFBool`, `IExistenceDescriptor`, `FsItem`, `MutableRef`, `UseQueryResponse`, `PricePretty`, `ConversationV3`, `DebtItemInterface`, `NestedContentField`, `PostMessageOptions`, `FbForm`, `IRouteItem`, `LogEvent`, `BigNumberFive`, `AbstractObject3D`, `commander.Command`, `AsyncSourceIterator`, `VStackProps`, `HorizontalAnchor`, `ExchangePriceQuery`, `FtrConfigProviderContext`, `IntervalOptions`, `OAuthUserConfig`, `IXulElementSpec`, `PopoverPlacement`, `RootCompiler`, `SemanticTokenData`, `NameMap`, `MapPlayer`, `SVType`, `StylableSymbol`, `IRenderable`, `LayoutPaneCtrl`, `HttpPayloadTraitsCommandInput`, `WebSocketProvider`, `SaveGame`, `UpdateSchemaCommandInput`, `ListEndpointsCommandInput`, `QueryTree`, `IManifestBindArtifact`, `AuditService`, `GraphQLModulesModuleContext`, `PrimType`, `PubPointer`, `ConfigPath`, `NgrxAutoEntityService`, 
`HdPrivateNodeValid`, `ObjectPattern`, `CompositeCollection`, `GlobalJSONContainerStorage`, `AssetResolver`, `TooltipType`, `Forward`, `RepositoryFactory`, `ServerErrorInfo`, `CertificateManager`, `requests.ListIPSecConnectionsRequest`, `SlideComponent`, `RenderStatus`, `Git.GitVersionDescriptor`, `NwtExtension`, `CompressionOptions`, `TagMapper`, `HTMLObjectElement`, `ModuleThis`, `EndRecordingRequest`, `ImportOrExportSpecifier`, `BitStream`, `AuthInterface`, `MockHashable`, `NumberEdge`, `DeclarationStatement`, `ReuseTabService`, `TCase`, `MagentoCart`, `styleFn`, `BlobServiceClient`, `TypeDefinitionParams`, `ITemplatizedCard`, `TimeResolvable`, `LinearLayout`, `net.Server`, `DataLoaderOptions`, `TLMessage`, `FakeMetricsCollector`, `Music`, `ThyGuiderRef`, `MoveCommand`, `FilterContext`, `CompletionItemProvider`, `ResourceTimelineViewWrapper`, `backend_util.TypedArray`, `PersistentCache`, `CraftTextBlock`, `IShapeBase`, `ObjectShape`, `FeedFilter`, `MacAddressInfo`, `ElementStylesModifier`, `ResponseHeader`, `requests.ListGrantsRequest`, `FocusableElement`, `ValueMapper`, `ITimeOffCreateInput`, `ApiEnumMember`, `FeatureContext`, `InputNodeExpr`, `CustomerAddress`, `LayerArrays`, `Cancellable`, `TsoaRoute.Models`, `ListDatasetImportJobsCommandInput`, `Exact`, `ListDatasetsResponse`, `ListJobsCommandInput`, `IntrospectionQuery`, `PrefixUnaryExpression`, `EquivMap`, `ASTTransformer`, `BottomSheetParams`, `ViewRegionInfoV2`, `WFDictionaryFieldValueItem`, `PerspectiveDataLoader`, `IOptionsFullResponse`, `CartService`, `TmdbTvDetails`, `messages.TestStep`, `FishSprite`, `FunctionAnnotationNode`, `DataViewBaseState`, `IOpenApiImportObject`, `DappInfo`, `requests.ListClusterNetworksRequest`, `IMediatorMapping`, `Delay`, `NotebookFrameActions`, `ChipsItem`, `ResultEquipped`, `WriteGetter`, `MockNgZone`, `MonthAndYear`, `KeyListener`, `ReferenceInfo`, `DatabaseTable`, `ControllerInstance`, `PostList`, `TestCollection`, `CompilerEventFileAdd`, `ThemeProps`, `LoggerParameters`, `ToolkitInfo`, `t.Comment`, `PushNotificationData`, `MapLayerSource`, `SyncPeriod`, `Set`, `PropertyModel`, `NetworkProfile`, `VectorTileDataSource`, `CompiledResult`, `SelectAction`, `BaseStruct`, `JsonFormsAngularService`, `FontNames`, `WebGLContext`, `P`, `RootOperationNode`, `LoginTicket`, `ReboostPlugin`, `ActivationLayerArgs`, `MarkBuilder`, `HTTPClient`, `KeyMapping`, `IncomingWebhook`, `UnionMember`, `ConfigFile`, `WritableStreamBuffer`, `ApiTypes.Groups.MessagesById`, `IHttpResponse`, `StyleResourcesFileFormat`, `DeleteAssetCommandInput`, `AuthProvider`, `MyCustomObservable`, `Activity`, `SlaveTimeline`, `GenericLayout`, `InterfaceAliasExport`, `Car`, `PrerenderUrlRequest`, `QueryRequest`, `GitUri`, `RegEntity`, `GetSettingSuccessCallbackResult`, `SavedObjectsResolveImportErrorsOptions`, `QComponent`, `Field_Slot`, `DeleteClusterCommandInput`, `Parser`, `ColumnRow`, `typescript.CompilerOptions`, `AppDefinitionProps`, `ParameterTypeModel`, `W2`, `TransformComponent`, `AsyncCallback`, `HydrusFile`, `BoundSideType`, `Info`, `DescribeMLModelsCommandInput`, `SyscallManager`, `RenderingDevice`, `WorkflowStepOutputModel`, `GX.IndTexScale`, `CeramicSigner`, `SdkClientMetricFrame`, `Tray`, `GlobalEventHandlers`, `SearchFilter`, `SigninOrSignupResponse`, `IConfigurationSnippet`, `IndexMap`, `ExternalLoginProviderInfoModel`, `DeleteFileSystemCommandInput`, `XIdType`, `NucleusApp`, `NativeImage`, `IVanessaEditor`, `MessageThreadStyles`, `SFARenderLists`, `DisplayListRegisters`, `SelExpr`, `GitDSL`, 
`MultisigBitcoinPaymentsConfig`, `AtomShellType`, `RoutableTileNode`, `OrbitControls`, `ComponentProps`, `f64`, `AppJob`, `WellState`, `WrapperProps`, `Sketch`, `ApplicationLoadBalancedFargateService`, `Shader3D`, `CustomSetting`, `ONodeSet`, `UpdateQueryBuilder`, `SearchFiltersState`, `ReaderObservableEither`, `UseTimefilterProps`, `TermType`, `DeleteChannelCommandInput`, `DSVEditor.ModelChangedArgs`, `TableName`, `PostgresConnectionOptions`, `Collider`, `NotRuleContext`, `Delaunator`, `Survey.Question`, `NewLineToken`, `Watch`, `LinkLabelVM`, `LoginCommand`, `BlockState`, `MeterChange`, `PathType`, `TokenFilter`, `GalleryApplicationVersion`, `ZRText`, `CompositeOperator`, `UIFileHelper`, `CmsModel`, `PropertyMatcher`, `ICategoryBins`, `CompletionList`, `ISummaryTreeWithStats`, `ResourceHandlerRequest`, `NormalDot`, `PluginEvents`, `ExportType`, `LanguageCode`, `WatcherMap`, `StartFlowCommandInput`, `BodyDatum`, `Deque`, `MonitoringMessage`, `PseudoElementSelector`, `ExecuteShellCommandFunction`, `AutoTranslateSummaryReport`, `ExportAssignment`, `requests.CreateJobRequest`, `UploadItem`, `ScriptContainer`, `ViewMeta`, `TokenParams`, `SpringValue`, `JPAExtraShapeBlock`, `OpenAPIV3.ParameterObject`, `Storybook`, `INumberDictionary`, `MeasureMethod`, `EnumOption`, `ResetPasswordAccountsValidationResult`, `PermissionsData`, `SwitchNodeParams`, `SentryRequestType`, `browser.runtime.MessageSender`, `CustomToolbarItem`, `Angulartics2Matomo`, `SimpleSavedObject`, `XStyled`, `AutofillScript`, `AttachmentService`, `ColorRegistry`, `Nodes.NameIdentifierNode`, `ICurrentWeather`, `UtxoInfo`, `Setting`, `Style`, `ListBranchesCommandInput`, `ICountryGroup`, `Range3dProps`, `ConstructorType`, `AccountProps`, `Adapt.AdaptElement`, `ElementGeometryCacheOperationRequestProps`, `DeleteUserResponse`, `FastRTCPeer`, `DaoTokenWrapper`, `TreeSet`, `Common.ISuite`, `EmployeeStatisticsService`, `AcceptedNameType`, `nodes.Declaration`, `PropertyMetadata`, `ReacordTester`, `ParsedDid`, `PublishJob`, `ConfigBuilder`, `AnyNgElement`, `CreateReactClientOptions`, `OrderByClause`, `IRuleOption`, `internalGauge`, `ParamNameContext`, `EntityMapperService`, `CS`, `purgeCommandCriteria`, `APProcessorOptions`, `SExpressionTemplateFn`, `TrackingOptions`, `LoopMode`, `Nothing`, `Scroller`, `StyledVNode`, `ShardFailure`, `IChunkOffsetBox`, `IVisitor`, `DrawCommand`, `DataConnection`, `StatusCode`, `EncryptionMaterial`, `Breadcrumb`, `StaticProvider`, `UseLazyQueryOptions`, `UnaryOperator`, `CreatedOrder`, `_Record`, `PadplusMessagePayload`, `OnItemExecutedFunc`, `AddRepositoryPayload`, `messages.TestStepResult`, `ServiceEndpointPolicy`, `IAureliaProject`, `tr.events.Name`, `IGenericTag`, `requests.ListSourcesRequest`, `RawSavedDashboardPanel730ToLatest`, `StepSelection`, `CLM.ActionBase`, `OnConflictUpdateBuilder`, `WorkspaceMap`, `IEmployeeAppointmentCreateInput`, `ActionSheet`, `VisualizeUrlGeneratorState`, `RequestSelectorState`, `IDebugger`, `GetObjectRequest`, `TriggerData`, `IScriptCode`, `PackageTypeReport`, `GraphExportedPort`, `FluentIterable`, `WritableDraft`, `Animated.EndCallback`, `Article`, `CompatConfig`, `NameSpaceInterface.Interface`, `IProviderInfo`, `FIRDocumentReference`, `RedBlackTreeNode`, `GetByEmailAccountsRequestMessage`, `Datasources`, `ArenaSceneExtraProps`, `S3URI`, `i18n.Message`, `GUID`, `ProtectionRule`, `CopyTask`, `UpdateProjectCommand`, `MDCTextField`, `ChainIndexingAPI`, `SerializationOption`, `HsLayerUtilsService`, `StringEncoding`, `TypeVariable`, `TelegrafContext`, `ThreeSceneService`, 
`express.NextFunction`, `StrokeOptions`, `mm.INativeTagDict`, `OptionConfig`, `MapLike`, `SConnectableElement`, `QueryEnum`, `IReaderRootState`, `BaseResource`, `HttpCall`, `GameType`, `PaymentIntent`, `ListGrantsRequest`, `UseCaseLike`, `ComponentRuntimeMetaCompact`, `FactPath`, `MenuEvents`, `MerchantService`, `AttachmentResponse`, `SpotifyService`, `solG1`, `LiveListItem`, `AnnotationActionTypes`, `CancellationErrorCode`, `requests.ListOAuthClientCredentialsRequest`, `ServiceAccount`, `EmbeddableStateWithType`, `IdentityArgs`, `ProductVariant`, `CustomBlock`, `NotWrappable`, `MDCAlertControllerImpl`, `Nes`, `AuthorizedRequest`, `Resetter`, `TicketMod`, `BindingWhenOnSyntax`, `NativePlatformDefinition`, `ChatEvent`, `IConnectedNodes`, `AuthClientInterface`, `DebugProtocol.Variable`, `ConvertFn`, `ASSET_CHAIN`, `VisParams`, `IShaderMaterialOptions`, `VuexModuleConstructor`, `ILanguage`, `ISampleToChunkBox`, `ScrollByY`, `Bluebird`, `Enable`, `NSIndexPath`, `EntryNode`, `SignatureHelpItems`, `IRenderParameters`, `CONNECTION_STATUS`, `NonNullableSize`, `FocusedCellCoordinates`, `Challenge`, `MarkdownTableRow`, `SeriesComposition`, `CourseUser`, `EuiTheme`, `IncomingRequest`, `CanvasType`, `OasParameter`, `ValueAttributeObserver`, `NextPageWithLayout`, `UpdateProjectCommandOutput`, `Metrics`, `EntityResolver`, `IPackageDescriptor`, `ToggleableActionParams`, `ITiledObject`, `nsIFile`, `SParentElement`, `WeapResource`, `WidgetFactory`, `AgAxisLabelFormatterParams`, `SettingsRow`, `Caller`, `UpdatedLazyBuildCtx`, `Events.postframe`, `DataModel.Metadata`, `MODEL`, `EventEmitter2`, `ITemplates`, `SqlOutputContentProvider`, `SocketIOGraphQLClient`, `SignedMessageWithOnePassphrase`, `ITkeyError`, `SessionToken`, `SemanticTokensLegend`, `tabItem`, `row`, `TextStyleProps`, `WebSocket.CloseEvent`, `IterationState`, `SpannedString`, `IBoxPlotData`, `BlockWithChildren`, `BuiltIns`, `BreakOrContinueStatement`, `WrapEnum`, `VBox`, `FilesystemDirectoryNode`, `RowParser`, `MoonBoard`, `PeerContext`, `D3Interpolator`, `NavParams`, `UIMillStorage`, `PBBox`, `ValidationConstraints`, `FcModel`, `GraphQLService`, `MkReplaceFuncStore`, `AcceptResult`, `Tilemap`, `IEventHandlerData`, `LogGroup`, `IFabricWallet`, `https.Server`, `PartitionKeyParams`, `PartialCanvasThemePalette`, `Hotspot`, `IncrementalParser.SyntaxCursor`, `FlattenLayerArgs`, `Calendar_Contracts.CalendarEvent`, `AwilixContainer`, `ClassWeightMap`, `Date`, `DecimalAdjustOptions`, `EdgeLabels`, `Insertion`, `JSDocSignature`, `ContentDescriptorRequestOptions`, `PluginApi`, `AppInstance`, `KeyData`, `KvMap`, `ResourceItemXML`, `Deletion`, `DispatchPropsOfControl`, `SizeObject`, `LocalizableString`, `Z`, `TreeViewItem`, `PDFName`, `TAttrs`, `Ret`, `QueryPaymentsRequest`, `DateTimeService`, `CallbackHandler`, `PvsioEvaluatorCommand`, `FormlyFieldConfig`, `ISettingsIndexer`, `Consola`, `SearchFilterConfig`, `IRenderService`, `MessageTypes`, `GraphQLHOC`, `t.Context`, `DocTableLegacyProps`, `SavedObjectsImportRetry`, `IReducerMap`, `IStructuredLicense`, `DeleteProfile`, `Counter2`, `ISqlCommandParameters`, `CallableConfig`, `StoriesDefaultExport`, `ThemePalette`, `DeleteStreamCommandInput`, `SecretData`, `DropdownService`, `TypeCondition`, `ITodosState`, `DeliveryTarget`, `InsightModel`, `IReduxStore`, `RefCallback`, `MagnetarInstance`, `PermissionsResource`, `TsOptionEngineContext`, `IModelTransformer`, `Rect`, `GetBotChannelAssociationsCommandInput`, `RulesByType`, `PopoverContextValue`, `GfxVertexAttributeDescriptor`, `RequiredStringSchema`, 
`RTMClient`, `ValidatorOptions`, `CreateParameterGroupCommandInput`, `ArrowCallableParameter`, `Range3d`, `ts.TypeAssertion`, `DetectionResultRowIndicatorColumn`, `ModalProps`, `ResourceDefinition`, `BinarySearchTreeNode`, `DateWrapperFormatOptions`, `Clients`, `SpotSession`, `ModuleConfig`, `CustomCameraControls`, `QueryOptions`, `Move`, `SpawnOptions`, `AllureConfig`, `PannerNode`, `OpenSearchDashboardsDatatableRow`, `UIBezierPath`, `ApiNotificationSender`, `TypedUseSelectorHook`, `ManagementApp`, `MenuListProps`, `ContentSource`, `PeerCertificate`, `ImageGLRenderer`, `SortParam`, `DescribeReservedInstancesCommandInput`, `CredentialManager`, `UpdateWebhookCommandInput`, `Pocket`, `AttributeMap`, `MagickInputFile`, `Dialogue.Config`, `Highcharts.JSONType`, `ProjectTemplate`, `ListColumnSetting`, `MeetingHistoryState`, `SliderCheckPoint`, `ControllerProps`, `CrowbarFont`, `StudioVersion`, `LogInRequest`, `IBasicProtocolMessage`, `IGetExpenseInput`, `NoncondexpressionContext`, `RecordFormat`, `IProtonAccount`, `ExpressionFunctionOpenSearchDashboardsContext`, `ParseTreePattern`, `NgElementConstructor`, `MatSliderChange`, `RemoteSourceProvider`, `Insert`, `CreateConnectionRequest`, `ProjectActions`, `RandomSource`, `ITimeLog`, `PropName`, `Validation`, `ActionParamException`, `CoreTypes.PercentLengthType`, `UrlWithParsedQuery`, `HTMLScLoadingSpinnerElement`, `IBase`, `TermRows`, `ToneAudioNode`, `BlobInfo`, `SubMenuProps`, `AnimatableElement`, `DialogProps`, `requests.ListAppCatalogListingsRequest`, `MockReaction`, `CachedUpdate`, `GfxSamplerDescriptor`, `NeovimClient`, `AccessorDeclaration`, `RBNFSet`, `IContextView`, `IndicatorAggregateArithmetic`, `ReflectiveInjector`, `ProxyInstance`, `FireCMSContext`, `IEntries`, `IndexedColumn`, `ClientOptions`, `CloudServiceResponse`, `FasterqQueueModel`, `PrEntity`, `ModelTemplate`, `UpworkService`, `ServiceExitStatus`, `d.ComponentCompilerData`, `SendView`, `SidePanelRanking`, `Mutex`, `PropsWithUse`, `d.InMemoryFileSystem`, `ts.server.Project`, `PgNotifyContext`, `AddonActions`, `AAAARecord`, `EmitAndSemanticDiagnosticsBuilderProgram`, `flatbuffers.ByteBuffer`, `HTMLMediaElement`, `StripeAddress`, `PromisedAnswer`, `XYZProps`, `DiContainer`, `VectorView`, `PropertyTreeNodeHTMLElement`, `vscode.TextEdit`, `CommentData`, `K8sManagement`, `NETWORK_NAME`, `PermutationSegment`, `TSender`, `GenericDefaultSecond`, `types.TextDocumentIdentifier`, `HelpList`, `MDBModalRef`, `NumericB`, `PatternAsNode`, `tracing.ReadableSpan`, `RawShaderMaterialParameters`, `Classifier`, `MockedResponseData`, `AuthController`, `FIRAuthDataResult`, `EngineArgs.PlanMigrationInput`, `MagentoAggregation`, `TestExporter`, `AcronymStyleOptions`, `BackgroundState`, `QueuedEventGroup`, `FlushConfig`, `TavernsI18nType`, `CreateError`, `OrientedBox3`, `MethodArgsRegistry`, `ConfigValues`, `NodeEvent`, `BucketAggParam`, `CollisionSolver`, `DimensionDetails`, `NextCurrentlyOpened`, `FtrProviderContext`, `Netlify`, `CreateInstanceProfileCommandInput`, `CompiledExecutable`, `TaskArguments`, `FilterMode`, `CredentialResponseCoordinator`, `requests.ListFastConnectProviderVirtualCircuitBandwidthShapesRequest`, `GraphinProps`, `TargetTrackingConfiguration`, `PackageData`, `DaffLoginInfo`, `SwingRopePoint`, `SFAMaterial`, `G1`, `NodeStatus`, `FeaturePipelineState`, `TransferItemFlatNode`, `ReferencedSymbolDefinitionInfo`, `RpcConnection`, `BigBitFieldResolvable`, `IServiceProvider`, `EventActionHandler`, `ListBase`, `HashTag`, `esbuild.OnResolveResult`, `VideoConverterFactory`, 
`CreateServiceCommandInput`, `BrandC`, `Order`, `CRDTArray`, `MicrosoftSynapseWorkspacesSqlPoolsResources`, `TeleportService`, `MockBroadcastService`, `DirectiveDefinition`, `IServer`, `RMCommandInfo`, `IMidwayApplication`, `ServiceTreeItem`, `PanResponderInstance`, `yubo.MainReducer`, `RecentlyClosedEditor`, `BasePlugin`, `ScriptableContext`, `DomainCategory`, `ControlFlowEnd`, `SVAddr`, `TimelineNonEcsData`, `DocReference`, `NormalizedNodeType`, `ICandidateFeedback`, `SyncDB`, `MockStoreEnhanced`, `CollectionFactory`, `PipelineValue`, `AssertLocationV2`, `TFLiteModel`, `ProjectInformationStub`, `ManifestInstance`, `AdBreak`, `ToolManagerService`, `UtilitiesService`, `WebhookRequest`, `TypeAliasDeclarationStructure`, `DetectedCompiler`, `TsChart`, `RBNFDecimalFormatter`, `TileObject`, `LogService`, `ChannelStoreEntry`, `Level2`, `CommitDetails`, `IDeferredPromise`, `PresentationPreview`, `ProjectFn2`, `TFS_Core_Contracts.TeamContext`, `TelemetryPluginStart`, `QueryHistoryNode`, `ColumnApi`, `AttestationModel`, `Unwatch`, `ObservableEither`, `EventData`, `ValueOf`, `BatteryCCReport`, `UrlPattern`, `KnownMediaType`, `FailedRequestType`, `ComboEventPayload`, `BuildHandlerArguments`, `Should`, `ThingType`, `CodeSnippet`, `ILeaguePrices`, `PlatformPath`, `IPipeline`, `messages.Step`, `RedisService`, `KonstColor`, `RouteQuoteTradeContext`, `SecretsManager`, `DragRefConfig`, `VideoFormat`, `IterableFactory`, `HandlerFn`, `ImageAndTrailImage`, `ProviderObservedParams`, `AnyWire`, `ExtractorMessage`, `ScaleOptions`, `FindRoute`, `MinecraftFolder`, `LchaColor`, `IViewPortItem`, `DeleteProfileCommandInput`, `MDCActivityIndicator`, `IFibraNgRedux`, `TransactionFactory`, `paper.PathItem`, `ShadowCastingLight`, `FieldValue`, `DotenvParseOutput`, `JSONRPCResponse`, `WheelDeltaMode`, `ex.PostDrawEvent`, `IInternalParticipant`, `CDP.Client`, `FlatTree`, `ConfigValueChangeAction`, `SAO`, `Card`, `DealService`, `$N.IBaseNode`, `CellEditor.CellConfig`, `Foam`, `ITrackDescription`, `types.AzExtLocation`, `CpuState`, `ConcreteLaunchOptions`, `Oid`, `MatchOptions`, `AttachedPipettesByMount`, `BabelDescriptor`, `DeleteResourcePolicyCommandOutput`, `ListComprehensionNode`, `ng.IRootScopeService`, `IMapSourceProvidersConfig`, `ResolveModuleIdResults`, `IBifrostAccount`, `StringToNumberSyntax`, `TextAlignment`, `Dialogic.IdentityOptions`, `PartialCliOptions`, `ModelNode`, `ArianeeTokenId`, `MapSimulation3D`, `ActionButton`, `ConfigActionTypes`, `SetStatus`, `UserExtendedInfo`, `TestPhysicalObject`, `TestAssertionStatus`, `RopInfo`, `MeshPhongMaterial`, `capnp.Pointer`, `HdErc20PaymentsConfig`, `SendCommandCommandInput`, `FeatureFilter`, `ITemplateId`, `FinalTask`, `ActionHandler`, `TestDataService`, `RestoreResults`, `EqualityDeciderInput`, `Phaser.GameObjects.GameObject`, `DocumentDataExt`, `NavProps`, `XPCOM.nsISupports`, `ITenant`, `CancelJobCommandInput`, `CreateDatabaseResponse`, `DnsResponse`, `RuntimeEngine`, `Overrides`, `TSelector`, `FabricEnvironment`, `StatefulSet`, `QueryKeySelector`, `Searcher`, `SimpleRenderer`, `TSpy`, `IAccessor`, `BarcodeFormat`, `GL`, `ReaderObservable`, `TestMarker`, `MonacoEditorModel`, `IFlexProps`, `IIdentity`, `ColorRGBA`, `ILayoutContextProps`, `CreateWorkflowCommandInput`, `ClassDeclaration`, `Asm`, `ListVaultReplicasRequest`, `KEXFailType`, `TrackedImportSymbol`, `MockValidatorsContract`, `TemplateEngine`, `RdsMetricChange`, `ChangeSetItem`, `PluginObj`, `Chai.AssertionStatic`, `ESCalendarInterval`, `ScreenType`, `ByteStr`, `Cue`, `StatsChunk`, `OptimizeCssOutput`, 
`CallbackMethod`, `CreateRegionPureReturnValue`, `requests.ListInstanceDevicesRequest`, `NeedleResponse`, `CreateVpcLinkCommandInput`, `TraceData`, `DatasourceSuggestion`, `IOptimized`, `DeepReadonlyObject`, `RequestInterface`, `SessionRefreshRequest_VarsEntry`, `ast.LookupNode`, `Konva.Stage`, `Liquidator`, `InMemoryPubSub`, `Compressors`, `PiLogger`, `TSClient`, `ContextualIdentity`, `ObjectCacheEntry`, `EventFieldInfo`, `ast.RunNode`, `QueryEngineRequestHeaders`, `NotificationHandler0`, `DateFormatOptions`, `StructureTower`, `EggAppInfo`, `CustomDocumentStoreEntry`, `MenuSurface`, `FileSystemEntry`, `ex.Engine`, `MDCNotchedOutlineAdapter`, `AbstractLogger`, `ThyDragDropEvent`, `MidiNote`, `ScenarioService`, `TaroText`, `IconsName`, `ListStacksRequest`, `PickerController`, `App.windows.IWindowModuleMap`, `CohortState`, `Cypress.PluginConfig`, `SubtitlesFileWithTrack`, `RoarrGlobalState`, `CalculatedBlock`, `TProvider`, `CreatePortalCommandInput`, `typeOfRow`, `PutEmailIdentityFeedbackAttributesCommandInput`, `ArtColumn`, `SubtitlesTrack`, `MdcChipAction`, `AsyncCPUBackend`, `ProviderMessage`, `Ver`, `deployData`, `IOneArgFunction`, `ApolloClient`, `LocalizedCountry`, `InternalTransition`, `FontProps`, `ILiteral`, `IVector2`, `DirectiveNode`, `SearchContext`, `d.LoggerTimeSpan`, `HsStylerService`, `OpPathTree`, `Phrase`, `RSPOutput`, `IValue`, `DeleteRouteCommandInput`, `EventName`, `vscode.DiagnosticCollection`, `ScreenshotCache`, `Expect`, `RequestPresigningArguments`, `MotionState`, `DetectEntitiesCommandInput`, `FirebaseAuth`, `MindNodeModel`, `SpaceStyleProps`, `DescribeTaskCommandInput`, `RegexDialect`, `PartialEntityCollection`, `ProviderWithScope`, `IHttpRequestOptions`, `CommonWrapper`, `MultipleDeclaration`, `EntityBuilder`, `ScanSegment`, `Skeleton`, `SignerPayloadJSON`, `BezierCurveBase`, `GenericDraweeHierarchyBuilder`, `ResourcePack`, `BridgeDeploy`, `Runner.Utils`, `CommandCreator`, `GetOperationRequest`, `CreateProcessOption`, `DashboardListingPage`, `AzureCustomVisionProvider`, `vscode.Range`, `DaffCartShippingRateFactory`, `Composite`, `AllowedModifyField`, `NestFastifyApplication`, `WFSerialization`, `EmbeddedOptions`, `TestReadable`, `Distortion`, `StatsGetterConfig`, `AppIdentity`, `ts.ArrayLiteralExpression`, `SkygearError`, `ActivityStreamsModel`, `PanGestureHandlerStateChangeEvent`, `RewriteRequestCase`, `ExpressRouteCrossConnection`, `EmbeddingLayerArgs`, `PrismaClientInitializationError`, `AppController`, `AccountType`, `PostCollector`, `SEOProps`, `MSIVmTokenCredentials`, `IItemRenderData`, `NexusInterfaceTypeDef`, `CmsEditorContentModel`, `Rx.Notification`, `DailyApiRequest`, `Palette`, `Texlist`, `EffectOptions`, `ILayout`, `PluginHooks`, `Raffle`, `CustomDomain`, `ViewerModel`, `ARAddOptions`, `ReplayEntity`, `DashboardType`, `ng.IHttpPromiseCallbackArg`, `B1`, `LabelAccessor`, `HitTestResult`, `GetApplicationResponse`, `HoldingUpdatedArg`, `OtCommand`, `ErrorLocation`, `ast.NodeList`, `ScalarActivity`, `HandlerStateChangeEvent`, `IndexedGeometry`, `SequenceKey`, `ConditionOperatorName`, `PBXFile`, `DefaultRootState`, `ProxyObject`, `NamespaceExportDeclaration`, `CBlock`, `d.RenderNode`, `TabNavigationState`, `TemplateConfig`, `GaxiosPromise`, `AxeResultConverterOptions`, `CollateContext`, `UserScriptGenerator`, `Listing_2`, `OutputConfig`, `ChatBoxStateModel`, `JobType`, `OrderStatus`, `GenericAPIResponse`, `HotkeysService`, `Dexie`, `FieldPlugin`, `http.ServerRequest`, `ExtensionService`, `HSD_TObj_Instance`, `DestinationSearchResult`, `InsightsResult`, 
`RequestService`, `HistoryService`, `HttpRequester`, `DescribeConnectorProfilesCommandInput`, `Session.ISession`, `TallyType`, `G2TimelineData`, `BackgroundTrack`, `Neuron`, `AsyncCommandResult`, `InputCurrencyOutput`, `ApolloPersistOptions`, `HTMLDocument`, `CreateLoadBalancerCommandInput`, `IntervalContext`, `PublishArgs`, `AxisAlignedBox3d`, `ChatThreadClient`, `XMLElementUtil`, `UpdateTemplateCommandInput`, `IJobPreset`, `JSONSchema4`, `DirectoryNode`, `LobbyHouse`, `TrackedStorage`, `PeriodModel`, `PieLayerState`, `superagent.Response`, `Force`, `RgbVisConfig`, `MapSet`, `SearchParams`, `T0`, `StreamWithSend`, `SettingValue`, `ListRenderItemInfo`, `PlansCategories`, `GrammarToken`, `FetchService`, `DeviceState`, `ReactQueryConfig`, `ProxyRequest`, `DeleteTokenCommandInput`, `MatFormField`, `PathProps`, `CombatEncounter`, `ParseTreeResult`, `WebSqlTx`, `SubmissionDetailEntity`, `PopoverOptions`, `BorderRadius`, `DockerApi`, `ComponentDoc`, `UntagResourceCommand`, `TreeNodeItem`, `ConcurrentWorkerSet`, `ICreateResult`, `BinOp`, `CPS`, `XmlMapsXmlNameCommandInput`, `VRMHumanoid`, `VNode`, `ResolvedFunctionType`, `FastFormFieldComponent`, `XSDXMLNode`, `DeclarationReference`, `HttpResponse`, `OutputFlags`, `FunctionServices`, `EventsFnOptions`, `apid.GetRuleOption`, `BuildLevel`, `ContextState`, `IOpenAPI`, `cytoscape.SingularElementArgument`, `ProductProps`, `Import.Options`, `GanttViewOptions`, `ExampleFlatNode`, `GitHubActions`, `ReleaseTag`, `GuildConfig`, `AnkiConnectRequest`, `Vocabulary`, `B15`, `CommandParams`, `ListApplicationsCommand`, `LocalWallet`, `InterfaceWithDictionary`, `DegreeType`, `StoreSetter`, `GestureResponderEvent`, `EbsBlockDevice`, `ScoreStrategy`, `OptionEquipped`, `WaveShaperNode`, `XRWebGLLayer`, `ImportDeclaration`, `ShellExecResult`, `PermissionLevel`, `CaseOrDefaultClause`, `AsBodiless`, `Webhook`, `StateDictionary`, `SequenceComponent`, `MyModule`, `SettingsFile`, `AdminState`, `VNodeArrayChildren`, `MultiSigHashMode`, `IFilter`, `MessageModel`, `OpenAPI.Schema`, `ClassList`, `NamespaceOperatorDecl`, `WechatQRCodeEntity`, `SVBool`, `WebsocketInsider`, `LiveEventMessage`, `ProseNodeType`, `BoundEventAst`, `SendPropDefinition`, `NodeConfig`, `UseMap`, `CustomVariant`, `Discord.Guild`, `LogAnalyticsSourceMetric`, `LocationAccessor`, `TypeScriptSubstitutionFlags`, `HalLink`, `DeviceSummary`, `FunctionToActionsMap`, `ListTemplatesCommandInput`, `PiElementReference`, `IRequestResponse`, `TransitionFn`, `WebpackPluginInstance`, `CacheContainer`, `AggregatedApiCall`, `WebsocketClient`, `SwimlaneActionConnector`, `ParamMetadata`, `CylinderGeometry`, `ScanResultResponse`, `ReviewerReadModel`, `ResetAction`, `CourseService`, `providers.Log`, `RGroup`, `WaitForYellowSourceState`, `MutationHookOptions`, `FilterOf`, `evaluate.Options`, `TagEventType`, `AmdModule`, `LightArea`, `StartExperimentCommandInput`, `MouseButton`, `SubscriptionHolder`, `Tensor3D`, `OpenApiParameter`, `EmotionCache`, `MapSubLayerProps`, `IVimStyle`, `utils.RepositoryManager`, `EdiDocumentConfiguration`, `StripeElements`, `interfaces.Lookup`, `SignedDebtOrder`, `TrialType`, `AccountState`, `ValidatorFunction`, `FunctionData`, `MagickFile`, `SGraph`, `IDateColumn`, `StatusEntry`, `Passenger`, `AudioItem`, `tBootstrapFn`, `LiteralObject`, `code.TextDocument`, `DialogForm`, `InitiateOptions`, `WetLanguage`, `BaseWeb3Client`, `QuerySnapshot`, `SchemaModel`, `Continuation`, `DeploymentExecutor`, `UiRequest`, `d.OptimizeCssOutput`, `ExpressionRendererRegistry`, `NodeCryptoCreateDecipher`, 
`PivotGroupByConfig`, `OptionsProps`, `GradientStop`, `PortalOutlet`, `Path7`, `XHRoptions`, `TYPE`, `KnownTokenMap`, `SpinnerService`, `CalendarViewEventTemporaryEvent`, `Puzzle`, `SecureStore`, `ImportedNamespace`, `LongNum`, `Score`, `IExecutionContext`, `ButtonState`, `Service`, `BrowserEvent`, `IMappingFieldInfo`, `HelpCenterAuthorService`, `RecordList`, `Konva.Shape`, `WsPresentationService`, `RegexComponent`, `PdfObjectConverter`, `Prompter`, `NavigationBarItem`, `PromptItemViewModel`, `Loc`, `ExpressionsSetup`, `CreateDataSourceCommandInput`, `ListNodePoolsRequest`, `PolicyRates`, `LunarYear`, `FieldDeclaration`, `VideoFile`, `IModulePatcher`, `MockERC20TokenContract`, `LogFn`, `SwitchCallback`, `JsonRpcHandlerFunc`, `TexCoord`, `TemplatingEngine`, `BadgeProps`, `ISiteDesign`, `GeoProjection`, `ThyPopoverRef`, `TrustToken`, `TokensPrices`, `JSX.Element`, `ParameterComputeType`, `StateChannel`, `SpecFun`, `ParseSourceSpan`, `Runner`, `ExpandedAnimator`, `GetChannelMessageCommandInput`, `SharedContents`, `PlanetaryTrack`, `GetBucketLifecycleConfigurationCommandInput`, `FindProjectsDto`, `IntLiteralNode`, `VectorKeyframeTrack`, `KeyProvider`, `IPathMapping`, `ContainersModel`, `OverridableComponent`, `CompoundMeasurement`, `IInternalEvent`, `ScoreHeader`, `GameConfig`, `shell.Shell`, `BabelPlainChain`, `TimestampShape`, `ITenantManager`, `INativeTagDict`, `Deno.ListenOptions`, `Proxy`, `PropConfigCollection`, `d.FsReaddirItem`, `WebView`, `CasePostRequest`, `TestFunctionImportSharedEntityReturnTypeParameters`, `ChatResponse`, `HeaderViewProps`, `AllocationItem`, `ProtocolConformance`, `Int16`, `InspectFormat`, `Viewpoint`, `VoteChoices`, `ITypedResponse`, `thrift.IStructCodec`, `BasePoint`, `ReleaseGoldConfig`, `OnGestureEvent`, `Is`, `ExternalWriter`, `ChainNodeFactory`, `SwappedToken`, `OpenApiDocument`, `NotAuthorizedException`, `THREE.BufferGeometry`, `PubsubMessage`, `Calendar_Contracts.IEventQuery`, `ABLParameter`, `NotifyQueueState`, `TypeAllocator`, `CLIArgumentType`, `CausalRepoBranchSettings`, `ThemableDecorationRenderOptions`, `IUnitModel`, `EvaluatedExprNode`, `ISessionBoundContext`, `IGraph`, `ImmutableObjective`, `ListProjectsRequest`, `FieldModel`, `PutEmailIdentityMailFromAttributesCommandInput`, `Point3F`, `T6`, `GfxVertexBufferDescriptor`, `ZWaveFeature`, `ITrackStateTree`, `ToolbarIconButtonProps`, `EdiSegment`, `TranslationAction`, `ParameterInvalidReason`, `SnapshotOptions`, `BalmEntry`, `FloatOptions`, `IAttachment`, `ExtensionPackage`, `AgreementData`, `DocumentModel`, `WearOsListView`, `VisibilityState`, `MigrateAction`, `MockRepository`, `ContractDBTransaction`, `ListNode`, `GlobalSettings`, `ToolbarTest`, `IMdcSegmentedButtonSegmentElement`, `TickAutomationEvent`, `HostLabelInput`, `DaemonConfig`, `TaskUser`, `ExcaliburGraphicsContextOptions`, `TypedKeyInfo`, `EmailTemplateService`, `NumberType`, `HTMLIFrameElement`, `DaffProduct`, `RouterStateSnapshot`, `Pluggable`, `ListOperationsCommandInput`, `AvailabilityStatus`, `MiddlewareMetadata`, `IRequestContext`, `ReindexService`, `AppEvent`, `ReadLine`, `TextureInputGX`, `SharedElementNode`, `X12Segment`, `WithdrawalMonitorObject`, `TransactionResponse`, `IPrimaryKey`, `ISparqlBindingResult`, `EventbusService`, `HTMLCmpLabelElement`, `schema.Document`, `PrivateThreadAndExtras`, `SubCommand`, `TNSCanvasRenderingContext`, `TransactionUnsigned`, `RealtimeEditMode`, `ImportParts`, `MeetingAdapterStateChangedHandler`, `ProxyServer`, `sast.Node`, `UrlService`, `Gradient`, `CacheData`, `xml.Position`, `ExecEnv`, 
`AnomalyRecordDoc`, `AggTypesDependencies`, `MirrorDocumentSnapshot`, `StateSnapshot`, `TopLevelDeclarationStatement`, `IRequestOption`, `AaiChannelItem`, `PartiallyEmittedExpression`, `FSService`, `PromiEvent`, `FontFace`, `FieldDefn`, `Graphics.Texture`, `SourceRule`, `TypescriptAst`, `CurrencyValue`, `IDraggableData`, `DDiscord`, `IFunctionCallArgument`, `ec`, `ListChannelMessagesRequest`, `FileSystemReader`, `ColumnInfo`, `long`, `PopulatedTagDoc`, `Entity.Account`, `PartialErrorContinuation`, `BasicCCReport`, `OverflowModel`, `ListManagementAgentInstallKeysRequest`, `AmmContractWrapper`, `SelectMenuInteraction`, `Ycm`, `ProfileService`, `GetCoordinate`, `DoubleMapCallback`, `JsonaValue`, `ConditionalBooleanValue`, `PropertyOptions`, `EventProvider`, `SxToken`, `TranslateContainerConfig`, `LogConfiguration`, `ValueMetadataNumeric`, `DescribeEndpointsResponse`, `ts.ForOfStatement`, `ObjectStorageSourceDetails`, `UAParserInstance`, `TemplateAnalyzer`, `CreateClusterCommandOutput`, `NgForageConfig`, `ServiceRecognizerBase`, `StagePanelManager`, `OptionNode`, `BScrollOptions`, `QuerySettings`, `IExpressionLoaderParams`, `InternalComputedContext`, `RegistrationDTO`, `EventmitHandler`, `DB`, `Precondition`, `AppPage`, `CCValueOptions`, `WriteFileOptions`, `Rand`, `IAvatarBuilder`, `ProtocolConformanceMap`, `DocsService`, `CaseExpr`, `Indent`, `ResolvedFile`, `CallSignatureInfo`, `AppEvent.Stream`, `FrameworkOptions`, `EmitFileNames`, `SavedObjectsClientWrapperFactory`, `LabelNode`, `DataLakePrincipal`, `WorkerServiceProtocol.RequestMessage`, `FnN`, `WrappedAnalyticsEvent`, `ContentShareObserver`, `MixedIdType`, `NodeToVisit`, `CustomerDTO`, `ControlProps`, `SyncResult`, `DeleteTagsCommand`, `ExecutorOptions`, `TSParseResult`, `ResourceInUseException`, `PartialBotsState`, `RootComponentRegistry`, `Canvg`, `EditorRange`, `LowAndHighXY`, `Greeter`, `RedHeaderField`, `FormField`, `ICalAttendee`, `CdkTreeNodeDef`, `ClassVarInfo`, `ProofBranch`, `ProjectionOptions`, `TearrData`, `TTurnAction`, `Postprocessor`, `CSSSource`, `ProgressData`, `DataWithPosition`, `MetaSchema`, `monaco.editor.IEditorMouseEvent`, `JSX.TargetedEvent`, `PragmaValueContext`, `SimpleNotification`, `UserContextType`, `Rect2D`, `ICourseModel`, `FormContext`, `PositionPlacement`, `OmvFeatureModifier`, `MatchRule`, `DefaultEditorAggParamProps`, `TileBoundingBox`, `MemberAccessNode`, `IDType`, `NavigationPublicPluginSetup`, `AuthStrategy`, `GenericAnalyzer.Dictionary`, `CheckIdTaskDto`, `WhitePage`, `FirenvimElement`, `MutationEvent`, `JsonValue`, `ConfigurableConstraint`, `NumericalRange0`, `EquipmentDelay`, `HashedItemStore`, `d.BuildConditionals`, `DidChangeConfigurationParams`, `ListApplicationVersionsCommandInput`, `MarkdownTheme`, `ServerCapabilities`, `SimpleMap`, `FileOpenFlags`, `ApplicationInfo`, `Highcharts.AnnotationPointType`, `PaymentResource`, `ProcessHandler`, `UpdateImportInfo`, `BlinkerDevice`, `X12Parser`, `BannerState`, `CreateViewNode`, `IClassParts`, `NormalizedComponentOptions`, `Timeout`, `CalibrationState`, `ExistsExpression`, `ThyNavLinkDirective`, `ListSchemasResponse`, `DOMStringList`, `GithubRelease`, `AugmentedProvider`, `D2rStash`, `CellStyle`, `ResourceManagementClient`, `ExtractGetters`, `CompletionResults`, `TabName`, `ShallowMerge`, `CreateTargetResponderRecipeDetails`, `FeltReport`, `ClientReadableStream`, `NormalizedFilter`, `ServerDto`, `AddApplicationCloudWatchLoggingOptionCommandInput`, `def.View`, `ControlComponentProps`, `TypedFormGroup`, `BitbucketUserEntity`, 
`requests.ListAlertRulesRequest`, `ILinkedListNode`, `ClassEntry`, `CardInterface`, `LooseObject`, `IdentityMetadataWrapper`, `GfxQueryPoolType`, `IMyFavouriteItem`, `GfxSwapChain`, `ControllerInterface`, `IXElementResult`, `PermutationListEntryWithTrackingData`, `AST.MustacheStatement`, `BucketAggTypeConfig`, `PathSegment`, `ExchangePositionInput`, `TimeoutTask`, `DropedProps`, `LayerInfo`, `ProtocolRequestType0`, `DynamicEntityService`, `App`, `DAVCalendar`, `ObservableSet`, `CompletionParams`, `TLabelName`, `CreateSchemaCommandInput`, `MatchmakerAdd`, `PiBinaryExpression`, `PassphraseError`, `PriceLineOptions`, `MetadataField`, `IOrganization`, `ts.LiteralType`, `CompactOrCondition`, `HubLinksWebPart`, `Q.Promise`, `ChannelAnnouncementMessage`, `ResultValue`, `PolynomialID`, `PluginsContainer`, `LoggingEvent`, `ScriptKind`, `FetcherOptions`, `Submesh`, `UnitConversionSpec`, `OnChangeType`, `TypedQuery`, `ThresholdedReLULayerArgs`, `RequestMessage`, `ActivityPropertyDescriptor`, `CfnRole`, `UserFunctionSignature`, `A3`, `TActorParent`, `PickScaleConfigWithoutType`, `Writer`, `TreeSitterDocument`, `ColorKey`, `Tutorial`, `MeetingSessionStatusCode`, `BrowseEntrySearchOptions`, `NodeCanvasRenderingContext2D`, `Toolkit`, `Multiaddr`, `IndicatorQueryResp`, `AdapterPool`, `SpawnResult`, `CachedValue`, `IPerfMinMax`, `ExecuteCommandParams`, `MintInfo`, `Aai20SchemaDefinition`, `emailAuthentication.Table`, `VocabularyEntryDetail`, `ListSuppressionsRequest`, `SanitizedAlert`, `ScopedObjectContext`, `SessionStore`, `DeleteVpcLinkCommandInput`, `DesktopCommand`, `Popup`, `LocalizedText`, `Paper`, `EndpointInput`, `VirtualCloudNetwork`, `DatabaseCredentials`, `VirtualContestInfo`, `PropertyDescriptor`, `ICodeEditor`, `ShapeAttrs`, `requests.ListPreauthenticatedRequestsRequest`, `InstallState`, `WorkspaceData`, `ConfigParameterFilter`, `messages.Tag`, `angular.IDeferred`, `TableFactory`, `SceneRenderer`, `IDiscordPuppet`, `LocalizeRouterSettings`, `StripeEntry`, `LedgerDigestUploadsName`, `MutableChange`, `MyView`, `SectionType`, `BreakpointKeys`, `CompileKey`, `JSONInput`, `ChromeNavControl`, `ITKeyApi`, `DrawCall`, `VaultData`, `BinarySwitchCCReport`, `UserRole`, `TestCLI`, `JsonRpc`, `NotificationLevel`, `ModeAwareCache`, `WorkspaceChange`, `React.HTMLAttributes`, `BlankLineConfig`, `GQtyConfig`, `PaginationModel`, `vscode.SymbolInformation`, `TodoComment`, `SVNumeric`, `MIRType`, `DurableOrchestrationClient`, `DeleteDedicatedIpPoolCommandInput`, `InstanceState`, `GraphicsComponent`, `CeloContract`, `SetLanguage`, `FieldFormatMap`, `ElementNode`, `Apple2IO`, `IGLTFLoaderData`, `ListUI`, `IpcMain`, `WalletEventType`, `SimpleExprContext`, `CreateProfileCommandInput`, `CreateAccessPointCommandInput`, `AzurePipelinesYaml`, `HistoryManager`, `ExtenderHandler`, `GetPackageVersionHistoryCommandInput`, `PathToProp`, `HandleEvent`, `MdcSnackbarConfig`, `NewPerspective`, `MenuNode`, `Vec2Like`, `SimpleAllocation`, `DaprManager`, `BarChartDataPoint`, `QueryExecutorFn`, `StateFromFunctionReturningPromise`, `ExpandedNodeId`, `ContextMenuFormProps`, `IGlobalEvent`, `tsc.Type`, `IgApiClient`, `PngEmbedder`, `MultiChannelAssociationCCAPI`, `ZeroXOrder`, `Todo_todo`, `CallbackContext`, `Columns`, `Newable`, `Fruit`, `GltfLoadOption`, `ExecException`, `SpatialDropout1D`, `ExcalideckEditorState`, `NormalizedIdentifierDescriptor`, `UseFetchReturn`, `ISubscribable`, `ConfiguredProject`, `SelectedItem`, `ReverseQueryInterface`, `IProcesses`, `Obstacle`, `SourceAwareResolverContext`, `ThisAddon`, 
`auth.AuthenticationDetailsProvider`, `TSLintAutofixEdit`, `PanelSocket`, `DiagnosticsLogger`, `IPriceAxisView`, `AggsSetup`, `SetterOrUpdater`, `IntPairMap`, `tf.Tensor2D`, `StandartParams`, `Int64`, `IItemAddResult`, `webpack.Stats`, `State.Transaction`, `def.Vec4`, `PutConfigurationSetSendingOptionsCommandInput`, `EmitterConfig`, `Events.postupdate`, `DefsElementMap`, `Shall`, `ConnectionTransport`, `OperatorFunction`, `ModalHelper`, `FocusZone`, `BuilderProgram`, `RenderingContext`, `DraggedItem`, `Notifire`, `YoutubeRawData`, `EntitySprite`, `TestModel`, `ScriptParametersResolver`, `XUL.chromeWindow`, `UpdateManyParams`, `RuleDefinition`, `P2PNodeInfo`, `CreateDomainCommandInput`, `KeyPairTronPayments`, `IDataType`, `StructuredAssignementPrimitive`, `OptionGroups`, `IDocumentOptions`, `FetchError`, `NodeListOf`, `PaletteConfig`, `UgoiraInfo`, `StoreKey`, `MessageSender`, `HttpResponseMessage`, `TextInputProps`, `OcpuUtilizationInfo`, `Collision`, `StoreDefinition`, `TestElement`, `AlterTableExecutor`, `FolderOrNote`, `IOAuth2Options`, `IFontFaceOptions`, `Current`, `IFormPageState`, `BitwiseExpressionViewModel`, `CustomPaletteState`, `WordcloudPoint`, `ColorSet`, `FileRef`, `DeleteClusterCommandOutput`, `RBNFCollector`, `LabeledStatement`, `DeepMapAsyncResult`, `HeaderInfo`, `FeaturesService`, `NodeDefaultCryptographicMaterialsManager`, `GraphDataset`, `IMethod`, `WorkflowClient`, `JsxSelfClosingElement`, `NetworkRecorder`, `Valve`, `SampleCartProduct`, `BinaryReader`, `ActionTree`, `ConverterLogger`, `ListManagementAgentImagesRequest`, `TextEditorPropertiesMain`, `WaitOptions`, `CentralSceneCCConfigurationSet`, `d.OutputTargetDistLazy`, `InvalidFieldError`, `AnimationClip`, `HTTPServer`, `PrRepoIndexStatistics`, `OrgID`, `interfaces.Container`, `IResult`, `ITrackSequence`, `FormatValue`, `EventChannel`, `MessageSeverity`, `VarInfo`, `CryptoFishContract`, `SqrlFeatureSlot`, `GetEnvironentsForProjectEnvironmentResult`, `LayoutResult`, `DueState`, `DryContext`, `ErrorCacheDelta`, `DeploymentStatus`, `ServiceId`, `RTCDtlsTransport`, `Conferences`, `GeometryCommand`, `EncryptedPassphraseObject`, `EventCategoriesMap`, `GraphType`, `WriteTransaction`, `VisTypeIconProps`, `Xml`, `MutationObserverWatcher`, `GenderRepartitionType`, `LFO`, `ClusterVulnerabilityReport`, `SettingsComponent`, `CopyDBClusterSnapshotCommandInput`, `MenuInfo`, `SavedObjectsRawDoc`, `StreamModelWithChannel`, `NodeStore`, `AlterTableNode`, `ISelector`, `A5`, `ISqlEditorResultTab`, `ResponseInterceptor`, `drive_v3.Drive`, `CertificateOptions`, `IActionItemUUID`, `VieraTV`, `RunConfiguration`, `DsDynamicInputModel`, `ISiteDefinitionDocument`, `PutResourcePolicyResponse`, `RobotsTxtOpts`, `FilePreviewDialogConfig`, `IApplicationContext`, `IDateFilter`, `CacheConfig`, `DiscoverIndexPatternProps`, `ListPoliciesResponse`, `CreateEncryptedSavedObjectsMigrationFn`, `IPackageVersionInfo`, `DisplayObjectTransformationProcess`, `NodeController`, `ProductDetailPage`, `Promised`, `SignedMessageWithTwoPassphrases`, `BidirectionalMergeMode`, `JsonLayout`, `AnyJson`, `DescribeEventsResponse`, `TextDocumentItem`, `ClientMenuOrderIdDTO`, `PageListItemProps`, `React.SVGProps`, `RegionGetter`, `IPickState`, `NodeEventHandler`, `CompilerFileWatcherCallback`, `OnFetchEventFn`, `BuildComparator`, `PrismaClientUnknownRequestError`, `ResultView`, `PropertyDocumentationBlock`, `ParamsOf`, `CTX`, `PopoutComponentEvent`, `ITaskRunnerDelegates`, `FleetConfigType`, `GestureConfigReference`, `SpatialControls`, `Loaded`, `Profiles`, `PostCondition`, 
`CascadeTestResult`, `TransceiverController`, `JsonRpcRequest`, `fc.Arbitrary`, `AffineTransform`, `BaseIO`, `p5.Graphics`, `Hunk`, `SwankRawEvent`, `TransactionAuthField`, `CoreRouteHandlerContext`, `ResizeObserverMock`, `SwitchEventListener`, `AxisComposition`, `SeekQueryResult`, `IStreamPolygon`, `GeometryHandler`, `OptimizerVariable`, `ShaderSemanticsEnum`, `TsunamiContext`, `AccountSteam_VarsEntry`, `PluginActionContext`, `DetectorEnum`, `MetadataRegistryState`, `DataFetcher`, `IUserGroupOptions`, `UndelegateBuilder`, `SubscriberAndCallbacksFor`, `Bettor`, `RadarColumnSeries`, `sinon.SinonStub`, `ListAssociationsCommandInput`, `React.SVGAttributes`, `FargateService`, `Validity`, `InternalStyle`, `Interfaces.RequestConfig`, `AppStateTypes`, `StartDeps`, `RouteWithValidQuote`, `MonitorSummary`, `CausalRepoClient`, `QueryServiceSetupDependencies`, `IdSet`, `ComboType`, `CasparCGSocketResponse`, `ListrObject`, `ProductRepository`, `ListProps`, `Traversable1`, `Hashable`, `VueWrapper`, `ICompiler`, `TimeOpNode`, `PerfEntry`, `SearchRecord`, `VerifyStream`, `SegmentId`, `AppWindow`, `IBranchListItem`, `Mocha.Done`, `SqlEntityManager`, `ReportIndicator`, `ProviderUserBulkRequest`, `Calculator.Client`, `CreateExperimentCommandInput`, `FluidBox`, `RenderContextBase`, `Video`, `TilemapSeries`, `SubMiddlewareBuilder`, `General`, `SpawnSyncOptions`, `TemplateExpression`, `OpenApi`, `SecurityRequirement`, `LabIcon`, `FlowParameter`, `Tx.Options`, `ReactiveControllerHost`, `StoreLike`, `ANodeExprLValueVar`, `GfxIndexBufferDescriptor`, `ConfigChoice`, `RawSeries`, `Core.Rect`, `HandlerType`, `AggregateValueProp`, `ElementsDefinition`, `IRequire`, `Progresses2Runners`, `NumberFormatOptions`, `IChunkHeader`, `Faction`, `Targets`, `ESTree.ImportDeclaration`, `ContentProps`, `IToolbarItemProps`, `CapacityProviderStrategyItem`, `YTMember`, `AttrValuesStore`, `StyleNode`, `RouteExecutionFromOutput`, `TournamentRecordList`, `Model.Book`, `PvsFile`, `ServiceJSON`, `ImageUrlOptions`, `VerifyCallback`, `PIXI.InteractionEvent`, `ThyAnchorLinkComponent`, `ListStreamsCommandInput`, `EntityStateResponse`, `PaymentData`, `IMap`, `FrescoError`, `CommandFlags`, `BackendSrv`, `MessageEmbed`, `PacketNumber`, `AtomOrString`, `MergeDeclarationMarker`, `IDatArchive`, `IErrData`, `QueryLeaseResponse`, `testing.ApplicationEnv`, `IPaneRenderer`, `ClientFileSearchItem`, `DebouncedFunction`, `Browser.WebIdl`, `Ajv`, `IICUMessage`, `AsyncLogger`, `PopupAlignment`, `Parsers`, `ReturnNode`, `DisplayContext`, `FunctionsMetadata`, `requests.ListRunLogsRequest`, `CameraOptions`, `FeedService`, `PointInTimeFinder`, `Tip`, `DiscoverLegacyProps`, `MockTokenTransferProxyContract`, `ContainerContent`, `FeatureFlags`, `IThread`, `TypeC`, `UninstallEventData`, `SessionDescription`, `PipelineTarget`, `CategoryStub`, `Yendor.BehaviorTree`, `Orientation`, `TokenlonInterface.TxOpts`, `ComponentRendering`, `FocusRingOptions`, `OpConfig`, `IScene`, `WorkingDayOfYearByMonth`, `IConfig`, `SVGO`, `SavedObjectsFindResponse`, `BedrockFile`, `VariableColor`, `DataModel.ColumnRegion`, `DrawParams`, `vscode.Webview`, `CtrExpBool`, `PubSubListener`, `ValidatePurchaseHuaweiRequest`, `KeysSource`, `SceneGroup`, `PureSelectors`, `SendOverrides`, `CompilerFsStats`, `DMMF.SchemaArgInputType`, `ConfigFactory`, `ts.Modifier`, `VisualizationsAppExtension`, `TabComponent`, `Markdown`, `MappedType`, `RequiresRuntimeResult`, `AStarNode`, `GroupedObservable`, `VertexAttributeGenDef`, `Float32BufferAttribute`, `MemberFieldDecl`, `ISettingsContext`, `TextRow`, 
`TransactionLog`, `ClassTransformOptions`, `HTMLTableColElement`, `JsxText`, `ObjExplorerObjDescriptor`, `ITriggerContructorParams`, `BreakNode`, `jdspec.SMap`, `HeftEvent`, `d.RuntimeRef`, `FailedImport`, `ExecContext`, `PayoutMod`, `AksClusterConfig`, `Matrix4`, `Period`, `RequestParameters`, `Positioned`, `NVMEntryName`, `SvgViewerConfig`, `IABIMethod`, `ISection`, `ParsedDocument`, `HiddenProps`, `CmsEntry`, `TestImage`, `BarTuple`, `GitlabAuthTokenRepository`, `DTO`, `CustomTransformers`, `GameResult`, `ScriptTarget`, `SchemaValidationResult`, `IColumnConfig`, `MockAlexa`, `BridgeableChannel`, `Program`, `IConnectionPageProps`, `IDataFilterResultValue`, `MutationTuple`, `PDFAcroListBox`, `string`, `JQLite`, `TraceContext`, `TimeOptions`, `ClusterConfig`, `AuthenticationInstruction`, `FIRDocumentSnapshot`, `PFS_Config`, `HomeView`, `BuildConfiguration`, `ClassEntity`, `BlockHeader`, `StoreType`, `Recipe`, `InsertContext`, `PQP.Language.Type.TPrimitiveType`, `ScheduledOperationDetails`, `IntegerRangeQuantifier`, `SWRConfiguration`, `SectionOptions`, `StateDeclaration`, `DbSeed`, `MatOptionSelectionChange`, `MiddlewareFn`, `IBaseAddressAsyncThunk`, `PyteaWorkspaceInstance`, `CliGlobalInfo`, `DocumentationResult`, `BaseBigNumber`, `SheetsArray`, `DatabaseIndexingService`, `ServiceInterface`, `DistrictsDefinition`, `ICollectionTrigger`, `TypedRequest`, `IJobFile`, `InjectedMetamaskExtension`, `IVisualizerStyle`, `MappedStates`, `MentionInfo`, `GCFBootstrapper`, `MockContainerRuntimeFactoryForReconnection`, `TreeRepository`, `CryptoKey`, `Hardfork`, `browser`, `IInviteGroupUsersOptions`, `Node.JSON`, `GetStagesCommandInput`, `SfdxFalconRecipeJson`, `BookmarkIdMapping`, `TheBigFanStack`, `PlanarMaskBaseTool`, `CfnApi`, `ICourse`, `MaybeVal`, `CubemapSky`, `IEnemy`, `FileError`, `SetNode`, `ForgeModMcmodInfo`, `ReadableStream`, `Validators`, `MultiKey`, `IReducerContext`, `ScrollbarOptions`, `DatabaseObject`, `ConsoleMessageType`, `CreatePageReq`, `Contacts`, `TransformableInfo`, `ModelInstance`, `ResponseToActionMapper`, `TextPossibilities`, `AnalyzerFileInfo`, `DateTimePatternFieldType`, `Functor`, `SEMVER`, `RefLecture`, `TypedEmitter`, `SnapshotMetadata`, `InputSpec`, `FileSystemManager`, `HandleType`, `VersionResult`, `DOMNode`, `DecodeResult`, `Viewer.SceneGroup`, `MiddlewareCallback`, `IndexData`, `DaffPaypalTokenResponse`, `FlamegraphNode`, `VoidType`, `ParamSpecValue`, `CompiledComponent`, `TileAniSprite`, `AstEntity`, `SetToken`, `OutlineDoOauthStep`, `TsCohortDateRangeComponent`, `MessageBoxReturnValue`, `ObjectExplorerService`, `BlockFragment`, `DecoratorConfiguration`, `React.ForwardRefRenderFunction`, `ForwardRefComponent`, `ElementPosition`, `QueryArray`, `UVTX`, `InternalQueryHandler`, `BuddyWorks`, `APIProvider`, `PieSeries`, `CommandsCache`, `JGadget`, `StartCallOptions`, `InvalidVPCNetworkStateFault`, `BufferGeometry`, `StringifyContext`, `HDOMImplementation`, `RenderTargetTexture`, `HKT`, `RuleCondition`, `NavigationOptions`, `IViewEntity`, `MergedProblem`, `CacheBehavior`, `FBXReaderNode`, `WriteTransactionReply`, `CRDPChannel`, `IAggType`, `EnhancedStore`, `SizeT`, `SchemaRegistry`, `ModelMetadata`, `CSI`, `HashPair`, `ImageView`, `ParsedMail`, `ApplicationSettings`, `FormSubmissionState`, `InsertionEdit`, `LoggerAdapter`, `SpaceId`, `LitecoinBalanceMonitorConfig`, `Coords`, `RoleListContext`, `ChainParam`, `td.Action1`, `PhaseEvent`, `Sourcelike`, `interfaces.IExtensionConfiguration`, `KSolvePuzzleState`, `AppxEngineStep`, `ISwidget`, `RTCRtpHeaderExtensionParameters`, 
`DescribeScheduledActionsCommandInput`, `IPizzasTable`, `protos.common.MSPPrincipal`, `Skill`, `SortingType`, `browser.management.ExtensionInfo`, `PanGestureHandlerGestureEvent`, `BufferType`, `LSTM`, `ExpressionRenderer`, `DescribeDomainAutoTunesCommandInput`, `CharsetNameContext`, `ISnippet`, `ExpressRequest`, `TypeArray`, `DeleteQueryBuilder`, `UIView`, `SessionInfo`, `ModuleLoader`, `AnimationControls`, `DataUnitUp`, `DocNode`, `FieldInfo`, `SocketClient`, `requests.ListAutoScalingPoliciesRequest`, `TObjectProto`, `RemixConfig`, `CanAssignFlags`, `MDCTextFieldFoundation`, `DebugProtocol.DisconnectArguments`, `PropDecorator`, `IntegerParameterRange`, `DeleteAppInstanceUserCommandInput`, `RedirectOptions`, `DeviceManager`, `TComponent`, `ToolsWorkspaceCommandResponse`, `VanessaDiffEditor`, `Datapoint`, `HistoryLocation`, `GanttItemInternal`, `DescribeConfigurationCommandInput`, `ApplicationType`, `INanoDate`, `CredentialsOptions`, `IListenerOptions`, `VizChartPanel`, `ChlorinatorState`, `CdsNavigationStart`, `PageFlip`, `BMMessage`, `VariantForm`, `NavbarElementProps`, `Method`, `MapInfo`, `TVEpisodeDAO`, `FormatContext`, `CancelFnType`, `PackageType`, `SetupPlugins`, `HierarchyIterable`, `IHawkularAlertQueryResult`, `IRepository`, `APIGatewayProxyEventV2`, `INavLink`, `ThyDialogConfig`, `NormalizedProvider`, `AmmFakeInstance`, `DisassociateMembersCommandInput`, `AuthStateType`, `IActor`, `OidcClientSession`, `StringDecoder`, `EngineOpt`, `IndexSignatureDeclaration`, `ZeroPadding2D`, `IRunData`, `ConnectionRecord`, `ClassWithMethod`, `FrameNodePort`, `PatternUnknownProperty`, `requests.ListWindowsUpdatesRequest`, `TBuffer`, `TexMtxMapMode`, `Intersection`, `DataGrid.Style`, `PlaneData`, `DeploymentExtended`, `B6`, `DomainType`, `BaseProperty`, `ComponentFactoryResolver`, `AuthorizationMetadata`, `Timeline.State`, `DAVResponse`, `SimulcastLayers`, `OneToOneOptions`, `PerformanceResourceTiming`, `VariantAnnotation`, `IIOPubMessage`, `Hertz`, `GitBlameLines`, `Kleisli`, `SignupResponse.AsObject`, `PropertySchema`, `DecodedDeviceType`, `ThyCollapsePanelComponent`, `ListJobRunsRequest`, `GetKeyboardResponseOptions`, `IEmailProvider`, `IntervalTree`, `INumberFieldExpression`, `firebase.firestore.WhereFilterOp`, `Formatters`, `IConnection`, `XrmStatic`, `SelectNode`, `FilteredPropertyData`, `PageCloseOptions`, `ScriptDataStub`, `GPUDevice`, `UnionOrFaux`, `MenuDataAccessor`, `ListJobsRequest`, `RC`, `EmbeddableOutput`, `ScriptSource`, `EnclosureShape`, `DeleteQuery`, `Histogram`, `ComputedGeometries`, `BrowserContext`, `IStream`, `ClassifiedParsedSelectors`, `InfrastructureRocket`, `PureVisState`, `UILayoutViewController`, `WithNodeKeyProps`, `ParquetField`, `ValueCtx`, `ViewportService`, `Motor.StopActionValue`, `RingBuffer`, `pulumi.Output`, `QName`, `INotification`, `WebGL2RenderingContext`, `LatexAst`, `ValidatorResponse`, `BotsState`, `T7`, `SbbDialogConfig`, `instantiation.IConstructorSignature3`, `CredentialCreationOptions`, `IHillWarriorResult`, `RepositorySettingsValidation`, `FunctionDeclarationStructure`, `IThriftField`, `P.Logger`, `ERC721`, `ThemeFromProvider`, `Inputs`, `AudioData`, `evt_sym`, `ScrollAreaContextValue`, `TableStringWriter`, `IMusicRecordGrid`, `WebSocketMessage`, `FolderWithSubFolders`, `PerfGroupEntry`, `Protocol`, `TimelinePath`, `Ring`, `HTMLDataListElement`, `ReflectionObject`, `TSocketPacket`, `GMxmlHttpRequestResponse`, `DescriptorProto_ReservedRange`, `JsonRPC.Request`, `ExternalFile`, `ExceptionListItemSchema`, `FailoverGroup`, `SideBarView`, 
`ReferenceToken`, `Header`, `UpdateConfigurationResponse`, `DebugThread`, `api`, `AttributeDefinition`, `ValidationRecord`, `ServiceSetup`, `SoftwarePackage`, `formField`, `ValidationError`, `ActionsStage`, `TriggerEventCommand`, `MessageIDLike`, `ToolbarItem`, `TaskPoolRunResult`, `ServerLock`, `AddRepositoryCommand`, `AuthorisationStore`, `ParticipantListParticipant`, `NzCellFixedDirective`, `VoteAccountAsset`, `IntervalTimeline`, `VertexList`, `ShippingState`, `Mesh`, `SavedObjectsStart`, `RangeEntry`, `TilingScheme`, `DOMWrapper`, `LinearFlow`, `PromiseReadable`, `Map`, `NotDeciderInput`, `StyleRendererProtocol`, `PermissionObjectType`, `GeneratorVars`, `CalendarManager`, `StyleBuildInfo`, `JSMService`, `InstantiationContext`, `ParagraphProps`, `YieldExpression`, `HistoryStore`, `StylableResults`, `TypedHash`, `VideoInputType`, `DiscordEvents`, `ContentBlock`, `PublicKeyInfo`, `DestinationConfiguration`, `DiagnosticRuleSet`, `AuthTokenInfo`, `ValidationBuilder`, `GroupUserList`, `ILayoutRestorer`, `ObjectMapper`, `PhrasesBuilder`, `SuiCalendarItem`, `TLIntersection`, `planner.PlannerConfiguration`, `Stat`, `IR.BasicBlock`, `PackedBubbleChart`, `Wrapap`, `Eris.Message`, `LinkedWorkTree`, `Organisation`, `ResourceModel`, `IFluidDataStoreRuntime`, `ExtractRef`, `Vec4Sym`, `AggregateField`, `BottomBarItem`, `ResizeObserverService`, `PlayerPosition`, `DateRangeShortcut`, `ChartEvent`, `RegisteredDelegate`, `NgForage`, `IBlockType`, `KeybindingRegistry`, `QueryLang`, `Lesson`, `LayoutItem`, `ConnectionInformation`, `ImageScrollBar`, `DescribePackagesCommandInput`, `FabSpecExports`, `Spotilocal`, `Node.NodeMessage`, `ProtoFab`, `requests.ListGoodBotsRequest`, `CurlyOptions`, `GaugeVisualizationState`, `PolyBool.Shape`, `QueryParserVisitor`, `TypeAliasInfo`, `MediaView`, `CreateOfficeHour`, `ErrorOptions`, `CommentPattern`, `DedentToken`, `KernelSpec`, `AriaDescriber`, `INormalizedMessage`, `SendResult`, `MediaPlayer`, `TemplateIntegrationOptions`, `TVEpisode`, `JSONSchema7`, `MockToken`, `ContainerOS`, `SyncHandlerSubsetOf`, `MIROp`, `OptionalMaybe`, `LayerNode`, `AuthStateModel`, `IndexResponse`, `WebpageMetadata`, `NoticeEntity`, `SkillService`, `LinkChain`, `BlockFactory`, `CRUDEvents`, `PartBody`, `InterfaceBuilder`, `Multisig`, `ParticlesFlyerView`, `DashboardContainer`, `Sessions`, `TestTag`, `QuickReplyItemProps`, `TheCloudwatchDashboardStack`, `EmitParameters`, `sdk.IntentRecognitionCanceledEventArgs`, `MIRRecordType`, `Anthroponym`, `ITrackCallback`, `OsmWay`, `CloneFunction`, `ControlParams`, `QPixmap`, `ResolvedPackage`, `SPBatch`, `BlockModel`, `UpdateTagDto`, `Screens`, `RequestConditionFunctionTyped`, `Playlist`, `UpdateEmailTemplateCommandInput`, `BaseFilterInput`, `IHttpPostMessageResponse`, `ForEachPosition`, `BoardState`, `ECSEntity`, `RemirrorManager`, `ContractEntry`, `FindCursor`, `InspectionFailure`, `ParseNodeType`, `AxisContext`, `FormItem`, `MenuTargetProps`, `UnixTime`, `CDPTarget`, `VisualConstructorOptions`, `MediaKeyComponent`, `CustomUrlAnomalyRecordDoc`, `GetPrTimelineQuery`, `ConditionalTypeNode`, `monaco.editor.IReadOnlyModel`, `AsyncMachine`, `Vorgang`, `VdmNavigationProperty`, `WorkRequestLogEntry`, `MixItem`, `TagFilter`, `DeleteLoadBalancerCommandInput`, `models.IArtifactProvider`, `JssContextService`, `Frustum`, `LoudMLDatasource`, `Activator`, `IUIEvent`, `RuleSummary`, `MemberRef`, `PlayerPieceLocation`, `BodyOnlyAcceptsNumbers`, `RoleDto`, `OutputTargetDocsReadme`, `d.CompilerModeStyles`, `ComparableValue`, `Classifications`, `MediaProps`, 
`UseRefetch`, `mongoose.Model`, `ChainConfig`, `ParserType`, `Deno.Process`, `DataValues`, `HomePublicPluginSetup`, `AuxBot3D`, `SerializeOutput`, `UpdateUserProfileCommandInput`, `CallIdRef`, `JWTTokenModel`, `SessionOptions`, `AccountBalancesResult`, `RestRequestMethod`, `ConversationContent`, `CustomPaletteParams`, `ArangojsResponse`, `CrochetPackage`, `CodeVersions`, `BaseAddress`, `TimedVariableValue`, `Dice`, `C2`, `PanelOptions`, `IHTMLElement`, `jest.Mocked`, `vscode.QuickPick`, `HTMLIonLoadingElement`, `ObjectTypeComposerFieldConfigDefinition`, `GitCommit`, `ANK1`, `LineupPlayerPosition`, `AutoforwardState`, `CreateConnectionCommandInput`, `JsonHttp`, `CodeGenModel`, `VideoGalleryStream`, `TSrc`, `IProtoBlock`, `ListDomainsCommandInput`, `GPUImageCopyTexture`, `GenericMessage`, `ReplaceResult`, `OrganizationRepository`, `AccountHasFundsFn`, `SerializedChangeSet`, `DeploymentEntry`, `GameRegistry`, `Transpose`, `DefinitionProvider`, `builder.UniversalBot`, `IconRegistryService`, `PublicKeySection`, `VoiceState`, `JobService`, `localVarRequest.Options`, `ResolveXName`, `BitArray`, `ResourceId`, `INPUT_SIZE`, `ResponderActivityType`, `CliAction`, `SecurityKey`, `Tardigrades`, `FSNetworkRequestConfig`, `InterpolatorFactory`, `BBox_t`, `QueryStatus`, `FSTree`, `LibraryBuilderImpl`, `Texture2D`, `FlexPlacement`, `NodeOptions`, `ITilemap`, `MangaListStatusFields`, `ProofRecord`, `Email`, `DataPositionInterface`, `VertexInfo`, `AccountSteam`, `SinonMock`, `uinteger`, `PackageDiffImpl`, `NarrativeSchema`, `ColliderData`, `SMTPServerSession`, `TextureBlock`, `IfNode`, `ContractClass`, `SourceEditorArgs`, `Future`, `Subscription`, `IFloatV`, `CancelWorkRequestResponse`, `LiveShare`, `ValidationErrorPath`, `YearAggregations`, `ProxyServerSubscription`, `ts.CompletionInfo`, `CurveLocationDetailPair`, `EventLocation`, `Trie`, `HTMLScLegendElement`, `ParsedPlacement`, `typedb.DBMethod`, `ISettingRegistry.ISettings`, `GetListParams`, `IOptimizelyFeature`, `DependencyKey`, `EnvVars`, `App.services.IUriService`, `Subtract`, `UpdateCourseOverrideBody`, `IWallet`, `CodeFixContext`, `BoundPorts`, `GetUserSettingsReadModel`, `Rectangular`, `ReactTestRenderer.ReactTestRenderer`, `OpenFileDialogProps`, `ArrayDiffSegment`, `TestSource`, `StartJobRunCommandInput`, `TypedComponent`, `S3StorageProvider`, `CartState`, `LabaColor`, `SparseGrid`, `SegmentAPISettings`, `LoadParams`, `Length`, `ProxyableLogDataType`, `TestModuleMetadata`, `IGetJobPresetInput`, `Knex.Raw`, `ContextualTestContext`, `Unsubscriber`, `CssAnimationProperty`, `TimeLimitItem`, `SUCUpdateEntry`, `ICategoryInternalNode`, `FlagshipTypes.Config`, `requests.ListCpeDeviceShapesRequest`, `ILineTokens`, `OffchainTx`, `ModuleSymbolTable`, `WeierstrassPoint`, `ImGui.Access`, `Paginator`, `FlowLogInformation`, `ts.IndexSignatureDeclaration`, `DurationMs`, `CompileResult`, `ParameterizedString`, `IGetLanguagesResponse`, `ActionSource`, `MemoryStorageDriver`, `ExploredCohortState`, `theia.Disposable`, `AuthContext`, `Chat`, `RenderLeafProps`, `NodeWithOrigin`, `SignatureVerifier`, `ClassScheme`, `DependencyList`, `ComponentBuilder`, `ts.TypeReferenceNode`, `CommandLineOption`, `JSDocParameterTag`, `I2CWriteCallback`, `ContextContributor`, `DecodedIdToken`, `BatchRequest`, `IRow`, `NetGlobalMiddleware`, `ValueRef`, `ParseConfigHost`, `Priority`, `IParsedError`, `MapService`, `Reffer`, `CommunityDataService`, `GeometricElement2dProps`, `TElement`, `ItemSliding`, `DeployedCodePackage`, `ImportInfo`, `ParamItem`, `EmptyClass`, `USBInterface`, 
`ClassPeriod`, `GossipFilter`, `TileLayer`, `CyclicDependencyGraph`, `ts.JsxAttribute`, `GitHubAPI`, `d.WatcherCloseResults`, `FormatId`, `InitSegment`, `DateOption`, `TagToken`, `LegacyField`, `JPAChildShapeBlock`, `BinaryWriter`, `monaco.languages.LanguageConfiguration`, `VFC`, `ItemDefinition`, `IFilterProps`, `AsyncState`, `ITopDownGraphNode`, `ImplementedFunctionOptions`, `PiLanguage`, `Model.Page`, `MainHitObject`, `ServeOptions`, `S3Client`, `DateValue`, `ServiceClientCredentials`, `AllOptions`, `OutputTarget`, `MemberForm`, `XYChartSeriesIdentifier`, `Kernel`, `Wah`, `SubgraphPlaceholder`, `TransformerDiagnostics`, `ClaimItem`, `LoginStatusChecker`, `IAddressState`, `IExpectedVerifiableCredential`, `GestureController`, `MassetMachine`, `AccountApple`, `TinymathFunction`, `FailoverDBClusterCommandInput`, `ActionResultComboCtx`, `ScaleCreationContext`, `TimelineTheme`, `NgParsedDecorator`, `IVariable`, `BoolLiteralNode`, `I18nConfig`, `ExpressionRunnerShorthandConfig`, `ICombinedRefCheck`, `ListTournamentRecordsRequest`, `vd.createProperties`, `SeriesCompareFn`, `ListComprehensionForNode`, `RefreshAccessTokenAccountsRequestMessage`, `IOHandlerForTest`, `NodeName`, `BackupFile`, `JobLogOption`, `LogAnalyticsSourceEntityType`, `IMutableVector4`, `FolderDetector`, `NullConfiguration`, `ARAddImageOptions`, `GraphQLScalarTypeConfig`, `TransactionOp`, `RequireNode`, `WorkspaceSymbolParams`, `SPClientTemplates.RenderContext_FieldInForm`, `RenderableOption`, `ResolverResolveParams`, `WebConfig`, `IDomMethods`, `FieldFormatConvertFunction`, `IGroupInfo`, `TMessageContent`, `Aggregation`, `GluegunPrint`, `AuthenticationParameters`, `requests.ListVolumeGroupsRequest`, `Cypress.PluginEvents`, `CodeGenResult`, `SNSInvalidTopicFault`, `LocalVarInfo`, `SObjectConfig`, `QualifiedNameLike`, `ANTLRBackend`, `LambdaRestApi`, `GetFederationTokenCommandInput`, `ConfigConfigSchema`, `KibanaPrivileges`, `AddressBalance`, `IRefCallback`, `ts.ResolvedModule`, `ReadModelQuery`, `FilterDef`, `TPlacementMethodArgs`, `SoftwareKeyProvider`, `IDocumentStorage`, `ConfigAggregator`, `MaskingArgs`, `E2`, `ErrorMessageProps`, `TimeStamp`, `SymbolKind`, `ResourcePolicy`, `DocgeniContext`, `FieldSchema`, `LoopOutParameter`, `BillName`, `ItemWithAnID`, `UnionTypeNode`, `JSZipObject`, `ForwardRefExoticComponent`, `BlockStateRegistry`, `ObjectDictionary`, `HTMLIonActionSheetElement`, `core.BTCGetAccountPaths`, `IDoc`, `SingleConnection`, `TileCoordinate`, `ShareService`, `Worksheet`, `UICollectionViewFlowLinearLayoutImpl`, `IAstMaker`, `PointerInput`, `ZeroBalanceFn`, `ExpressionWithTypeArguments`, `NVMJSON`, `DiscordMessage`, `SwatchBookProps`, `PushSubscription`, `NodeConstructor`, `DeploySuccess`, `FileIdentifier`, `TwingFilter`, `SubmitFeedbackCommandInput`, `ModelPredictArgs`, `ISampler2DTerm`, `ShadeCoverOptions`, `AndroidPermissionResponse`, `PositionRange`, `WatcherFolder`, `AmbientZone`, `CodeAction`, `PlatformEvent`, `Chest`, `LocaleProps`, `Looper`, `AuthType.Standard`, `V2BondDetails`, `NSMutableURLRequest`, `IState`, `SignedState`, `DefaultExecutor`, `MiniNode`, `ast.NodeAttributes`, `IdentityTest`, `DirectionType`, `DraftEntityInstance`, `MenuModel`, `ToString`, `StashResult`, `ImageCanvas`, `utils.BigNumberish`, `PetService`, `ButtonTween24`, `ComputedField`, `IAureliaProjectSetting`, `IStatistics`, `IPath`, `Base`, `Reason`, `FilterMeta`, `ArtifactDelta`, `CCashCow.Payment`, `ServiceLogger`, `DebugBreakpointDecoration`, `ArgumentParser`, `CategorySortType`, `GraphQLTypeResolver`, `IMessageEvent`, 
`FileModel`, `GitAuthor`, `OtokenFactoryInstance`, `ResolveResponse`, `IOptionFlag`, `DomPath`, `MutationCallback`, `GX.TevColorChan`, `ListDetectorsCommandInput`, `ListProjectsCommandInput`, `FunctionBinding`, `V1PersistentVolume`, `OrganizationState`, `PartialApplicationConfig`, `CurvePrimitive`, `SingleTablePrettyfier`, `Claimants`, `AdonisApplication`, `Cave`, `WikiItem`, `JsonStringifierParserCommonContext`, `SMTPServer`, `ImportResolverFactory`, `AuthContextState`, `PlayMacroAction`, `PaginationNavProps`, `thrift.TField`, `Decision`, `ActionStatusEnum`, `Orphan`, `QueryInterface`, `VocabularyStatus`, `Neutrino`, `WalkNext`, `NgGridItemPosition`, `ErrorChain`, `RowNode`, `MDCBottomNavigationBar`, `Numeric`, `SceneFrame`, `ConfigsService`, `IRequestUserInfo`, `React.ReactElement`, `CreateUserProfileCommandInput`, `DomainService`, `ListWorkRequestsRequest`, `MouseEventHandler`, `GX.TexGenType`, `requests.ListSubscriptionsRequest`, `NamedTupleMember`, `ActionsConfig`, `EdgeMemento`, `Endpoint`, `JsonContact`, `HttpEnv`, `TaskService`, `PokemonType`, `PlatformService`, `IMetricsRegistry`, `Workspaces`, `RlpItem`, `SBDraft2CommandOutputParameterModel`, `MovieOpts`, `GossipPeer`, `OffsetPosition`, `NexusGraphQLSchema`, `GetPublicAccessBlockCommandInput`, `SourceBreakpoint`, `CRG1File`, `ParsedQuery`, `Matrix2d`, `PendingTransaction`, `FormatterOptions`, `FocusedElement`, `ITelemetryLogger`, `IndicatorForInspection`, `PackageManagerPluginImplementation`, `MangaFields`, `AutoOption`, `TNSPath2D`, `SeriesPlotRow`, `WalkerArgs`, `ShaderNode`, `FeatureSymbology.Overrides`, `DbIncrementStrategy`, `ProcessedBundle`, `ProgressToken`, `RouterInstruction`, `TargetType`, `CreateJobDetails`, `GlobOptions`, `RequirementBaseModel`, `AttachmentItem`, `TSInstance`, `NotebookWorkspaceName`, `PlayerIndexedType`, `SubtitlesCardBase`, `RenderSource`, `Genre`, `ITokenMatcher`, `BlendMode`, `ResultData`, `CommandLineOptions`, `FlatTreeControl`, `MessageContent`, `IKey`, `IRandomAccessIterator`, `Erc20Mock`, `GraphQLInterfaceType`, `DeleteLoggingConfigurationCommandInput`, `MediaChange`, `CreateReplayDto`, `HttpContextContract`, `ListCustomPluginsCommandInput`, `JsExpr`, `RnM2Material`, `CoverageCollection`, `SelectionItem`, `IRawBackupPolicy`, `ThyPopoverConfig`, `React.CompositionEvent`, `BookingsModel`, `Foo`, `ReadFileOptions`, `FormatMessage`, `ATNConfigSet`, `CompileRepeatUtil`, `ISearchRequestParams`, `ServerSideProps`, `SizeData`, `PipelineId`, `PkgJson`, `ExpandedTransitionListener`, `ParentComponent`, `NotImplementedYetErrorInfo`, `EtjanstChild`, `ProjectBuildOptions`, `PlainObject`, `N6`, `CreateAttendeeRequestItem`, `SubscriptionClient`, `CustomIntegrationRegistry`, `ThySkeletonComponent`, `I18NextPipe`, `core.Connection`, `LocatorDiff`, `Options`, `CliParam`, `CommandOutputBinding`, `MonitoringOutputConfig`, `IDataFrame`, `PhysicalElement`, `CollapsibleListProps`, `MessageActionRow`, `Merge`, `Catalog`, `DogePaymentsUtilsConfig`, `Messaging`, `vscode.TextEditor`, `ControllerConfig`, `MatchResult`, `BaseContract`, `CalendarManagerService`, `Incident`, `ReuseTabNotify`, `ElementInstance`, `ProcessingPayload`, `CompositeDraftDecorator`, `STIcon`, `ECDb`, `KernelInfo`, `SchematisedDocument`, `ArrayProperty`, `TranslationFacade`, `AnyRect`, `LayoutSettings`, `PointMarkerOptions`, `ReturnTypeFunc`, `RegistryMessage`, `ObjectCacheState`, `MixArgs`, `Highcharts.VMLDOMElement`, `LangChangeEvent`, `ModelSnapshotType`, `AggField`, `FrontMatter`, `Invite`, `EventProcessor`, `sdk.TranslationRecognitionResult`, 
`FastifyTypeBoxHandlerMethod`, `Intent`, `GregorianDate`, `Track`, `ZoweUSSNode`, `tfc.io.IOHandler`, `VerifyOptions`, `FileIconService`, `ChannelMessageSend`, `StepNode`, `ProvenClaim`, `ContentDirection`, `IChildrenItem`, `OutputTargetEmptiable`, `FormMethods`, `RequestBase`, `PopupMessage`, `Protocol.Input.DragData`, `ParsedSelectorAndRule`, `RegisteredSchema`, `thrift.TType`, `PromiseEmitter`, `StringCodeWriter`, `TransactOptions`, `ProposalTemplateService`, `Peripheral`, `ComponentTester`, `ISetOverlapFunction`, `UpdateFileService`, `IColorEvent`, `MonitoringGroupContext`, `PlanService`, `TaskInstance`, `LocationType`, `KC_PrismData`, `ProjectMode`, `NamedCollection`, `GithubUserRepository`, `CombatStateMachineComponent`, `SettingRepository`, `FlushMode`, `AnimGroupData`, `TreeListComponent`, `TName`, `EosioTransaction`, `TypeSourceId`, `ChangeHandler`, `CookieManager`, `AsyncIterator`, `Rect2`, `LifecycleChannel`, `TextFieldWithSelectionRange`, `ConditionFn`, `SourceDataItem`, `Logger`, `IGetTimeLogConflictInput`, `FormAction`, `SourceTargetFileNames`, `UIBeanHelper`, `LineAnnotationSpec`, `PublishResponse`, `HttpInterceptController`, `SymlinkInode`, `XSelectNode`, `IDependency`, `Preferences`, `MicroframeworkSettings`, `BlinkerResponse`, `ComputedBoundsAction`, `DropTableNode`, `FnAny`, `GetFileOptions`, `Quantity.REQUIRED`, `AggregateSpec`, `restify.Request`, `OpenPGP.key.Key`, `FindOneOrFailOptions`, `MongooseSchema.Types.ObjectId`, `CreateElementRequiredOptions`, `ZodUnion`, `AccordionProps`, `Applicative2`, `GReaderConfigs`, `ValueMetadataBuffer`, `RtcpSourceDescriptionPacket`, `SvelteElement`, `NetType`, `ExtensionState`, `CreateExtensionPlugin`, `pd.E2EPageInternal`, `ClientAuthentication`, `MemoryView`, `panel_connector.MessageHandler`, `AdapterGuesser`, `FormFieldConfig`, `PoolType`, `LoadOptions`, `DatasetMemberEntry`, `TionDeviceBase`, `EnvOptions`, `JsonRPC.Response`, `Subspace`, `SQLTransaction`, `BaseDbField`, `GroupCurrencyCode`, `Requester`, `AccountParser`, `Clock`, `DebugProtocol.StepOutResponse`, `MXFloatingActionButtonLocation`, `Comment`, `ListAvailabilityHistoriesRequest`, `FromYamlTestCaseConfig`, `WSClient`, `Unknown`, `PackagesWithNewVersions`, `DayPickerContextValue`, `Sort`, `DecodedJwt`, `ImportGroup`, `JobState`, `TypePairArray`, `FakeContract`, `WebSocketConnectCallbacks`, `cp.ChildProcess`, `FeatureAst`, `BandHeaderNS.CellProps`, `ErrorStateMatcher`, `CrudGlobalConfig`, `RegisterCr`, `EditorManager`, `PlatformType`, `ProxyRequestResponse`, `LogLayout`, `QueryRef`, `OutputTargetDistLazy`, `ShortTermRetentionPolicyName`, `TargetedEvent`, `MapProps`, `UnknownParticipant`, `PublicMethodsOf`, `AWSAccount`, `EngineAttribute`, `Guild`, `ICliCommandOptions`, `ForNode`, `LabelBullet`, `ArcProps`, `PlaceholderMapper`, `types.ScriptType`, `TelemetryRepository`, `Response.Response`, `ForInitializer`, `ParsedMapper`, `WebsocketRequest`, `HealthCheck`, `TransferHotspotV2`, `KeyMap`, `ExposureMode`, `ServerView`, `IMetricsService`, `Locker`, `ListBotsCommandInput`, `AddPermissionCommandInput`, `PatchOptions`, `HStatus`, `DTONames`, `ThyUploadResponse`, `ISubmitData`, `NpmPackage`, `Ribbon`, `CardContext`, `P2P`, `InternalKey`, `AnalyzerState`, `Types.OutputPreset`, `StaffService`, `AudioBuffer`, `ExpectedTypeResult`, `VisualUpdateOptions`, `AssertionExpression`, `d.OutputTargetDistCollection`, `CustomLoader`, `PortalConfig`, `MdcSwitch`, `CLR0_MatData`, `StatusProps`, `ProofNodeX`, `BrowserDownloads`, `ConnectionInfoResource`, `ListCertificateAuthoritiesCommandInput`, 
`BigFloat53`, `DoneCallback`, `XliffMerge`, `RandomUniformArgs`, `VectorLayer`, `LoginScript`, `ListrTask`, `CAC`, `vscode.Progress`, `Lint`, `requests.ListVlansRequest`, `t_b79db448`, `SearchQueryUpdate`, `DataBlock`, `TDiscord.Client`, `StylePropConfig`, `IIPCClient`, `ConfigurationCCAPISetOptions`, `PriceAxisViewRendererCommonData`, `CommonDialogService`, `Raw`, `IAddressSpace`, `MarkerProps`, `Refactoring`, `NET`, `KeywordErrorCxt`, `JWTService`, `GfxRenderPipelineDescriptor`, `UpdateError`, `EventNameFnsMap`, `AccountsScheme`, `QuadrantRefHandler`, `NumberToken`, `OutputDataConfig`, `GetOptions`, `Ruleset`, `IndexingStatusResolver`, `ILogOptions`, `ActionOptions`, `NormalisedSearchParams`, `WorkspaceMiddleware`, `BaseRedirectParams`, `IOrganizationBinding`, `MetaClient`, `Exit`, `MintAssetRecord`, `Jump`, `iff.IChunkHeader`, `HypermergeUrl`, `DescribeAppInstanceAdminCommandInput`, `JSONSchemaRef`, `HostWindowService`, `DatasetSummary`, `Recording`, `CreateSampleFindingsCommandInput`, `MeetingParticipants`, `Bytes32`, `MutationElement`, `ClipsState`, `AssembledTopicGraphics`, `ActorType`, `SpaceSize`, `ListThemesCommandInput`, `BodyPixConfig`, `FieldFormatsContentType`, `Payport`, `RenderPage`, `GoldenLayout.ContentItem`, `RawTypeInfo`, `ITelemetryProperties`, `AnyNativeEvent`, `ThreadKey`, `DeleteRequest`, `TEUnaryOp`, `EditableProps`, `EffectContext`, `NzMessageService`, `TimePickerControls`, `Active`, `ITransValueResult`, `DeleteApplicationCommandInput`, `TokenManager`, `apid.RecordedTagId`, `MainAreaWidget`, `TLocaleType`, `ConfigurationsClient`, `ICommonHeader`, `d.OutputTargetDistTypes`, `StoreObjectArg`, `AndroidChannelGroup`, `PointerEventHandler`, `MinHeap`, `Device`, `EmacsEmulator`, `AfterWinnerDeterminationGameState`, `workerParamsDto`, `Milestone`, `CosmeticFilter`, `HeadElement`, `StartStop`, `ConfirmHandler`, `GetDMMFOptions`, `EntityProps`, `UnparsedSource`, `OnChangeValue`, `TestingRunOptions`, `SponsorOptionsOpts`, `ChartElementSizes`, `PeopleEmitter`, `ShortcutType`, `PBXGroup`, `MDCMenuSurfaceAdapter`, `Real_ulonglong_numberContext`, `SocketServer`, `ts.SemanticClassificationFormat`, `Events.predebugdraw`, `DepositTransaction`, `VirtualApplianceSite`, `TLE.Value`, `... 
28 more ...`, `ICombiningOp`, `ObjectCacheService`, `MarkMessageAsSeenCommand`, `IPipe`, `FunctionMethodsResults`, `IEtcd`, `ListAppInstanceAdminsCommandInput`, `RendererAPI`, `SendEmailCommandInput`, `WordcloudUtils.PolygonPointObject`, `PacketRegistry`, `DebugProtocol.NextArguments`, `ChunkExtractor`, `RegExpReplacement`, `EmitBlockKind`, `ODataResponse`, `AutoSubscription`, `Caret`, `InvalidDatasourceErrorInfo`, `UpdateGatewayCommandInput`, `TableContext`, `SortedSet`, `ChildReferenceDetail`, `IsMutedChangedListener`, `CachedResource`, `DataService`, `Csp`, `IAmazonInstanceTypeOverride`, `RSAKey`, `TocState`, `sdk.DialogServiceConnector`, `DeployView`, `IIStateProto`, `Keystore`, `CdkColumnDef`, `TodoTaskList`, `ListItem`, `DataArrayTypes`, `HeaderSetter`, `StatResult`, `SnapshotProcessor`, `Threshold`, `FileCompletionItemManager`, `DoublyLinkedListNode`, `HeadlessChromiumDriver`, `IPlayerActionCallback`, `GetParameters`, `R.List`, `MetadataValueFilter`, `MediaRule`, `IZoweTree`, `IUIAggregation`, `ChildProcess`, `chrome.windows.Window`, `TableFinder`, `UserSettingsStorage`, `IExternalDeviceId`, `FilesMatch`, `ValidationErrorItem`, `GroupType`, `IItemTemplate`, `NvLocation`, `ts.ResolvedModuleWithFailedLookupLocations`, `SetupOptions`, `EpicMiddleware`, `JPAEmitterWorkData`, `tf.Tensor`, `AsToken`, `CertaConfig`, `AuthenticationProviderOptions`, `ReportManager`, `SharesService`, `RSAEncryptionParams`, `AppMountParameters`, `EvaluatedTemplateArgument`, `DefaultSequence`, `ICCircuitInfo`, `IFilterArgs`, `ASTResult`, `MockTrueCurrency`, `IPortal`, `GlobalStorageOptionsHandler`, `RoleValidationResult`, `nodeFunc`, `SyncMode`, `IAgent`, `SharingSessionService`, `Abbreviation`, `ListWorkspacesRequest`, `ContractParameter`, `UsePaginatedQueryReducerAction`, `TreeMate`, `SearchOptions`, `SwiftDeclarationBlock`, `ElementAspect`, `BootstrapOptions`, `GetResourceCommandInput`, `CertificateRule`, `DecodedAttribute`, `XPCOM.ContentFrameMessageManager`, `Long`, `IBaseEdge`, `Vector2d`, `ArrayIterator`, `nodeFetch.RequestInit`, `StoryObj`, `IntPairSet`, `SourceMapper`, `ArtifactEngine`, `u64`, `VertexType`, `MonitoredElementInfo`, `IHandlebarsOptions`, `ProductOperations`, `ComparisonOptions`, `StoryArchive`, `TtLCreatorOptions`, `JSONScanner`, `EditorConfiguration`, `models.NetCore`, `GetUsersRequest`, `MDCRadioAdapter`, `IOracleListener`, `EventTrigger`, `BeEvent`, `GeneratorTeamsAppOptions`, `RequestMethodType`, `CppSettings`, `ICreateOrgNotificationResult`, `FabricWalletRegistryEntry`, `ChangeDescription`, `View1`, `D3Selector`, `ScriptVersionCache`, `DeepType`, `StoreData`, `IAssetsProps`, `CriteriaNode`, `FileMetadata`, `FontStyle`, `QueryFilterType`, `Seg`, `PrimitiveNonStringTypeKind`, `TContainer`, `ContentManagementService`, `AlertNavigationRegistry`, `PoiLayer`, `CallError`, `ExpectApi`, `TokenGroup`, `IDriver`, `ODataQueryMock`, `FeedProviderType`, `TodosPresentST`, `RuleSet`, `ProfileResponse`, `CustomAvatarOptions`, `ISubject`, `FastPath`, `IPascalVOCExportProviderOptions`, `ICordovaLaunchRequestArgs`, `ActorRef`, `RemoveBuffEvent`, `CacheWithRedirects`, `DecryptedMessage`, `LoggingServiceConfiguration`, `NgxFeatureToggleRouteGuard`, `RoomReadyStatus`, `TransactionJSON`, `ReplacePanelAction`, `FormGroup`, `RemoteStoreOptions`, `IssueIdentifier`, `MinimalCancelToken`, `JsxTagNameExpression`, `IPrivateKey`, `ScriptElementKind`, `LabStorageService`, `SurveyMongoRepository`, `DescribeUserResponse`, `ComponentCompilerStaticEvent`, `TypeAttributeMapBuilder`, `CommandEventType`, 
`SubTiledLayer`, `AddressHashMode.SerializeP2PKH`, `BRRES.RRES`, `FileStream`, `EditLog`, `RelationInput`, `FormDependency`, `Filters`, `server.ManualServerConfig`, `WhiteListEthAsset`, `TokenDict`, `FieldDestination`, `IRole`, `CHILD`, `RequestPrepareOptions`, `CreateBundle`, `Chain`, `selectionType`, `ITBConfig`, `RawBackStore`, `DocumentStateContext`, `Entities`, `MagicLinkRequestReasons`, `PageMeta`, `LinkedIdType`, `PlanetApplicationRef`, `ImageHelper`, `DocumentTypeDef`, `ErrorDetail`, `DiscoveredMethodWithMeta`, `Messenger`, `ProgressState`, `RecordRepresentation`, `LanguageModel`, `events.Name`, `ButtonItem`, `DraftEditor`, `Conv2DInfo`, `GDQOmnibarMilestoneTrackerPointElement`, `TestCreditCardPack`, `InjectorModule`, `ComponentCompilerTypeReferences`, `ParseValue`, `UserProfile`, `O2MRelation`, `JimpImage`, `ConfigLogger`, `RefreshToken`, `AngularFireOfflineDatabase`, `UseProps`, `GlobalStringInterface`, `VerifyUuidDto`, `VisualViewport`, `Wei`, `Pod`, `Flo.EditorContext`, `RequestHandlerContext`, `JestTotalResults`, `IRenderLayer`, `TwitchBadge`, `IMapper`, `RepoInfo`, `LoadSettings`, `RequireOrIgnoreSettings`, `ParsedHtmlDocument`, `V1ClusterRoleBinding`, `GeoUnitIndices`, `java.lang.Object`, `DeleteClusterCommand`, `ObjectProps`, `ZoneType`, `SecuredFeature`, `SubmitFnType`, `BlockLike`, `PerformStatArgs`, `Interpolation`, `HSVColor`, `DashboardAppLocatorDefinition`, `RectAnnotationStyle`, `TimeConstraint`, `GunGraphNode`, `EzApp`, `vBlock`, `ColumnDefinitionNode`, `WikiFile`, `GroupUserList_GroupUser`, `GeoCoordinates`, `NxValues`, `echarts.EChartsOption`, `TeamList`, `SavedObject`, `SignedByDBInterface`, `RemoveNotificationChannelCommandInput`, `IStudy`, `xLuceneVariables`, `EntityHydrator`, `NumberValidator`, `UiKit.BlockContext`, `SPADeploy`, `TSelectedItem`, `TagService`, `CustomEvent`, `TSunCardConfig`, `MouseEventToPrevent`, `ShaderType`, `RebootDBInstanceCommandInput`, `IPushable`, `Sequence`, `DeleteSourceServerCommandInput`, `uproxy_core_api.CreateInviteArgs`, `CursorPosition`, `ClassDecorator`, `DrawState`, `DisposableSet`, `CommandPacker`, `RGBValue`, `DataGrid`, `UrbitVisorState`, `TGroupBy`, `Validate`, `FaunaRoleOptions`, `LoadBalancer`, `GraphMode`, `RecordsQuery`, `NavigationScreenProp`, `Milliseconds`, `ValidatedJobConfig`, `AccountAndPubkey`, `IInterpolatedQuery`, `AbiCoder`, `Api`, `MockTask`, `ConvertOptions`, `Remote`, `Origin`, `FilterOptions`, `EnvironmentAliasProps`, `ts.VariableDeclarationList`, `EdmxEnumType`, `PackageExpanded`, `WorkspaceSnaphot`, `ShoppingCartItem`, `PLSQLRoot`, `ParquetWriterOptions`, `FileSpan`, `ProjectData`, `io.IOHandler`, `INgWidgetContainerRawPosition`, `ClassAndStylePlayerBuilder`, `ENDDirective`, `ListRulesCommandInput`, `TreeStateObject`, `sdk.Recognizer`, `Errback`, `CardComponent`, `LiteralContext`, `Messages.BpmnEvents.TerminateEndEventReachedMessage`, `ProjectRiskViewEntry`, `ReporterConfig`, `Client`, `PathUpdater`, `FocusType`, `NestExpressApplication`, `HK`, `ActiveWindow`, `AppInstanceEventType`, `LanguageEffects`, `FetchListOptions`, `CollectionDependencyManifest`, `InitializerMetadata`, `ContainerNode`, `WalletI`, `LocationOffset`, `ChoiceValue`, `WFWorkflowAction`, `requests.ListBastionsRequest`, `TestSuiteInstance`, `IpcService`, `PluginResultData`, `NgModuleDefinition`, `LoaderEvent`, `ReactDataGridColumn`, `NetworkgraphPoint`, `TestScriptError`, `Commit`, `DeviceID`, `IScribe`, `BenchmarkData`, `AnyObjectNode`, `MDCChipActionType`, `MessengerTypes.PsidOrRecipient`, `ResolvedConfiguration`, `Lexer`, 
`IOptionTypes`, `IVariableDefinition`, `FirebaseDatabaseService`, `KubeConfig`, `Tangent`, `IChainableEvent`, `Events.pointerdragstart`, `EquipmentStatus`, `LeftObstacleSide`, `user`, `postcss.Declaration`, `SocketIO.Server`, `ScaleGroup`, `ChildRuleCondition`, `PolygonProps`, `InitializeStateAction`, `SapphireDbOptions`, `IClusterHealthChunkQueryDescription`, `d.FsItems`, `ClientInstance`, `LSTMCellLayerArgs`, `WaveType`, `FeatureInterface`, `BoundSphere`, `MonitoringConfig`, `ITypeFactory`, `ShippingService`, `TransformConfigUnion`, `Hsva`, `ICommandResult`, `UpdateCategoryDto`, `IGetTimeLimitReportInput`, `apid.ChannelId`, `WriterToString`, `StartMeetingTranscriptionCommandInput`, `IEntryDefinition`, `RemoteSeries`, `PointGeometry`, `ProgressBarService`, `StackPath`, `ColliderShape`, `ParsedDID`, `TagsProps`, `ts.ImportEqualsDeclaration`, `IAdjacencyCost`, `CollaborationWindow`, `DesignerNodeConn`, `OverlayReference`, `IProjectCard`, `ApplicationCloseFrame`, `SourceCodeLocation`, `FunctionCallContext`, `Encoding`, `DndEvent`, `LoaderFunction`, `FoldingRange`, `DropDown`, `Body2DSW`, `IRequestApprovalCreateInput`, `Separator`, `AcLayerComponent`, `WorkflowStepInputModel`, `CosmosdbAccount`, `FilesystemProvider`, `TabEntity`, `DIRECTION`, `IntervalTimelineEvent`, `WaitForEvent`, `PaymentChannelJSON`, `TransactionBuilder`, `IClusterDefinition`, `JsonRpcRequestPayload`, `requests.ListReplicationSourcesRequest`, `TimelineTrackSpecification`, `AggsCommonSetupDependencies`, `SerializedNodeWithId`, `CasesClient`, `Json`, `MatDatepickerIntl`, `Fetch`, `HTMLAudioElement`, `StakingTransactionList`, `ModelArtifacts`, `SerialAPICommandMachineParams`, `DataFrameAnalyticsListRow`, `UpdateChannelParams`, `ShaderVariant`, `AnimatedSprite`, `SvgPoint`, `TcpConnection`, `TransformCallback`, `DatabaseInterface`, `AGG_TYPE`, `BookingState`, `OotOnlineStorage`, `IFreestylerStyles`, `SCClientSocket`, `EntityState`, `ExternalProps`, `mergeFunc`, `MutableTreeModel`, `AnnotationLineProps`, `Allowance`, `SVGFilterElement`, `CrossProductNode`, `DescribeVolumesCommandInput`, `TrieMap`, `SubgraphDeploymentID`, `JoinerCore`, `DialogConfig`, `ParsedCssFile`, `ILease`, `TreemapSeriesNodeItemOption`, `TestDuplex`, `AiService`, `IIonicVersion`, `Models.OrderStatusReport`, `FieldUpdates`, `MUserId`, `AztecCode`, `MatchmakerTicket`, `IBatch`, `RTCPeer`, `FoldCb`, `Measurable`, `LoginPage`, `MoveT`, `ReleaseResource`, `SingleKeyRangeSet`, `CategoriesState`, `AboveBelow`, `MockCloudExecutable`, `ParsedMessagePartPlaceholder`, `ConfigurationContext`, `ParserService`, `YggdrasilAuthAPI`, `ParsedJSXElement`, `IP`, `NgxMdService`, `SettingNavigation`, `MediaDiagnosticChangedEventArgs`, `ExampleProps`, `ResponderHelper`, `InspectTreeResult`, `parseXml.Element`, `InputType.StyleAttributes`, `PlotConfigObject`, `Jwk`, `ESAssetMetadata`, `ELang`, `MapsVM`, `SafeAny`, `StateCreator`, `MultiSelectRenderProps`, `EventFactory`, `GetStaticProps`, `HttpWrapper`, `Scene3D`, `meow.Result`, `ListPingMonitorsRequest`, `Spaces`, `NullConsole`, `SqlStatisticsTimeSeries`, `FlattenedFunnelStep`, `Projects`, `SFAAnimationController`, `ITypedNode`, `IArticle`, `TPagedList`, `LabelValuePair`, `IMiddlewareProvider`, `IDateFnsLocaleValues`, `ObservableQueryValidatorsInner`, `CardAndModule`, `EslingPlugin`, `URI`, `RawShaderMaterial`, `CreateTokens`, `CompositeTraceData`, `WithBigInt`, `FormGroupState`, `GetServiceRoleForAccountCommandInput`, `MetricAggParam`, `AADResource`, `TsChipComponent`, `EchartsTimeseriesChartProps`, `V1PodList`, 
`AnyConfigurationSchemaType`, `DecimalFormatter`, `UserManagementService`, `cc.RenderTexture`, `RouteComponent`, `ParticipantsRemovedListener`, `NewMsgData`, `PureReducer`, `Handlebars.HelperDelegate`, `Requests`, `GlobalGravityObj`, `Radio`, `MerkleProof`, `MapPolygonSeries`, `IRChart`, `MessengerTypes.SendOption`, `ReadonlyNonEmptyArray`, `WalkerCallback`, `MetaKey`, `ChemControllerState`, `Modules`, `ComponentCompilerData`, `LinesTextDocument`, `AgCartesianChartOptions`, `Candle`, `TasksService`, `OperatorAsyncFunction`, `QueryHookOptions`, `CompilerOperation`, `Follower`, `WebResource`, `SubmissionServiceStub`, `AxisLabelOptions`, `TextAnimationRefs`, `LegacyService`, `EzRules`, `GenericConstructor`, `IEntityMetaOptions`, `BrowserHelper`, `ConfigDeprecation`, `UhkBuffer`, `Events.collisionstart`, `CommentResponse`, `RenderElementProps`, `instantiation.IConstructorSignature7`, `IDownloadOptions`, `API.storage.IPrefBranch`, `TaskParameter`, `CardView`, `ICredential`, `UploadedVideoFileOption`, `ApiCallByIdProps`, `PortingProjects`, `IGraphicOption`, `LogChildItem`, `DeletePermissionPolicyCommandInput`, `FactoryContextDefinition`, `SelectionSet`, `ObjectType`, `EveesMutation`, `IBoot`, `LayoutStyleProps`, `CompilerState`, `TabData`, `CoExpNum`, `NextFn`, `SecurityUtilsPlugin`, `GlobalMaxPooling2D`, `IChangeHandler`, `CurrencyMegaOptions`, `TPageWithLayout`, `PrimitiveFixture`, `Delta`, `MobileCheckPipe`, `TransformerStep`, `IStateTreeNode`, `ParsedResponse`, `FileChangeEvent`, `child.ExecException`, `CategoryPage`, `CalculatedColumn`, `providers.TransactionResponse`, `QueryCacheResult`, `GetAppInstanceRetentionSettingsCommandInput`, `XYArgs`, `GitHubService`, `requests.ListWafTrafficRequest`, `LatLng`, `SceneActuatorConfigurationCCReport`, `ResolveNameByValue`, `ConnectionHealthPolicyConfiguration`, `ModuleManager`, `IAPIFullRepository`, `UserRegisterData`, `BlogPostService`, `Line3`, `CanvasFontFamilies`, `SortedQuery`, `Stringifier`, `WebSocketEventListener`, `MemberAccessFlags`, `Applicative1`, `StageInterview`, `TabbedTable`, `ActionConfig`, `TraderWorker`, `Subscribe`, `ElectronLog`, `ProjectEntity`, `ComponentRegister`, `NoteCollectionService`, `PDFNumber`, `CreateBackupCommandInput`, `DisplayState`, `EventBinderService`, `LevelService`, `UpdateEvent`, `GetIn`, `SplittedPath`, `requests.ListSnapshotsRequest`, `PackageLockPackage`, `typedb.DBType`, `ZoneChangeOrderModel`, `OrbitCoordinates`, `NormalizedFormat`, `MaybeNestedArray`, `d.SitemapXmpResults`, `DataviewSettings`, `BoxObject`, `CanvasRenderer`, `IKeyPair`, `ListAst`, `IJWTPayload`, `NormalizeStyles`, `MaskServer`, `ExampleSourceFile`, `RestoreDBClusterFromSnapshotCommandInput`, `CounterfactualEvent`, `ApiErrorMessage`, `FormatProps`, `GraphPartitioning`, `DatasetManager`, `SwimlaneRecordPayload`, `TxLike`, `EventEmitter.ListenerFn`, `DescribeAlarmsCommandInput`, `AthleteSettingsModel`, `ServiceQuotas`, `DynamoDB.BatchWriteItemInput`, `UseSelectStateOutput`, `JPARandom`, `SQL`, `FlowLabel`, `TileCoords2D`, `InternalErrorException`, `CustomIntegrationsPluginSetup`, `CssClass`, `ITransitionPage`, `IFieldExpression`, `ProviderInfrastructure`, `RequestModel`, `Zeros`, `SyntaxKind`, `NodeModuleWithCompile`, `FzfResultItem`, `ITableAtom`, `OperationContract`, `DelayLoadedTreeNodeItem`, `DataTypesInput.Struct2Struct`, `PullRequestNumber`, `ModuleSpecifierResolutionHost`, `Opcode`, `ViewerRenderInput`, `CompilerEventFsChange`, `ITestAppInterface`, `ProfileModel`, `InvalidateAPIKeyResult`, `Contexts`, `ModelService`, `LegendPath`, 
`RuntimeTreeItem`, `IndexPatternsServiceDeps`, `IChannelModel`, `ReadonlyObjectKeyMap`, `Dialogue`, `Viewport`, `RegisteredServiceUsernameAttributeProvider`, `LookupKey`, `KeyboardListener`, `PrimitiveType`, `ParameterStructures`, `FormfieldBase`, `ExecutedQuery`, `SprintfArgument`, `ReflectionProbe`, `QueryConfig`, `Zone`, `AzExtTreeItem`, `WebhookClient`, `TwingFunction`, `PatternMappingEntryNode`, `KintoClientBase`, `LinearRegressionResult`, `RedisClientType`, `CaseConnector`, `OperatingSystem.macOS`, `Vec4Term`, `MessageSentListener`, `Mat4`, `RecurringCharge`, `ListTemplateVersionsCommandInput`, `CustomFunction`, `WexBimRegion`, `TimesheetFilterService`, `ActiveQuery`, `NameT`, `BeforeUnloadEvent`, `QueuedEvent`, `VerificationToken`, `FormInstance`, `LogData`, `ImagePicker`, `ts.ExportDeclaration`, `reduxLib.IUseAPIExtra`, `UpdateApplicationDetails`, `ValuesMap`, `IOrganizationProjectsFindInput`, `React.DragEventHandler`, `IVectorSource`, `ForkName`, `PageContainer`, `MdcChip`, `ArgsMap`, `CreditWords`, `IServerSideGetRowsParams`, `CachedBreakpoint`, `d3Transition.Transition`, `SharedFileMetadata`, `AstModule`, `TestingGroup`, `TransitionProps`, `EditorInterface`, `ExecutionActivity`, `Cardinality`, `GroupActorType`, `ModeManager`, `Not`, `IPersonDetails`, `ICertificate`, `FSNetwork`, `DescribeDatasetGroupCommandInput`, `vscode.CompletionItem`, `PatternAtomNode`, `SerializedValue`, `Shim`, `IMessage`, `HammerInstance`, `ViewerPreferences`, `GroupByColumn`, `Path8`, `CacheInstance`, `GqlContext`, `TreeCursor`, `SpreadAssignment`, `MockSdkProvider`, `ElementAnimateConfig`, `InterfaceWithDeclaration`, `IWorld`, `ExecuteOptions`, `IdentityService`, `SoFetchResponse`, `Song`, `TransportResult`, `NoiseSocket`, `IOrderedGroup`, `React.ElementType`, `GeometrySector`, `MiStageState`, `SocketType`, `CreateManyDto`, `THREE.Color`, `DocumentRegistry`, `SubscriptionsClient`, `PortRecordMap`, `StoreContext`, `telemetry.Properties`, `AssetOptions`, `DeployingWallet`, `ProjectSpecBase`, `DFChatArchiveEntry`, `requests.ListAutonomousContainerDatabasesRequest`, `DriverSurface`, `BlockNumberPromise`, `HTMLSlotElement`, `MenuI`, `UpdateContext`, `SBGClient`, `BlockCipher`, `OrderedDictionary`, `d.NodeMap`, `EventInteractionState`, `Computation`, `Angulartics2AppInsights`, `CanaryConfig`, `PackageToPackageAnalysisResult`, `IResizeInfo`, `TemplateStore`, `ICoinProtocol`, `JsonLdDocumentProcessingContext`, `AlgoFn`, `Classify`, `FaastModuleProxy`, `MigrateEngineLogLine`, `TabDataType`, `RegisteredModule`, `CustomGradientFunc`, `FormPage`, `ExpressRouteCircuit`, `Realm`, `TabPane`, `t.Transformed`, `ColumnSummary`, `StageDataHolder`, `GetReviewersStatisticsCollectionPayload`, `SimpleLinkedTransferAppState`, `NamedExportBindings`, `ConnectionType`, `Backbone.ObjectHash`, `Flo.ElementMetadata`, `FileState`, `RequestDetailsState`, `ControlCenterCommand`, `EmbeddableSetupDependencies`, `ValidResource`, `WithEnum`, `DefaultTreeElement`, `CrudRequestOptions`, `VisTypeAliasRegistry`, `EndpointClass`, `keyboardKey`, `WindowOptions`, `CommandLineOptionOfListType`, `BaseEditor`, `DomainEvent`, `DataUp`, `AstLocation`, `CalendarHeatmapData`, `AbstractSqlModel`, `RuntimeConfiguration`, `IChainConfig`, `ServicePort`, `CredentialService`, `ResetButtonProps`, `LoggerInstance`, `ts.LanguageService`, `Delayed`, `SaveOptions`, `ExpressionContext`, `IfStatementContext`, `MarkdownSerializerState`, `MUserAccountId`, `Itinerary`, `BaseFrame`, `IAngularEvent`, `RequiredAsset`, `ClientRequestSucceededEventArgs`, 
`ServerHelloVerifyRequest`, `MakeRestoreBackup`, `LeakyReLU`, `CCResponsePredicate`, `MaskListProps`, `ObservableUserStore`, `GeneratePipelineArgs`, `WriteTransactionRequest`, `HelpfulIterator`, `ToastController`, `EncryptOptions`, `QuestWithMetadata`, `TagnameValue`, `OrganizationPolicyType`, `UsersDetailPage`, `CopyrightInfo`, `MarkSpecOverride`, `Home`, `WorkItemService`, `UntypedProductSet`, `DecodeInfo`, `Natural`, `IProduct`, `IExperiment`, `TargetResponderRecipe`, `Primary`, `IFluidDataStoreFactory`, `React.ComponentProps`, `ResultWithType`, `ObservableApplicationContextFactory`, `MergeDomainsFn`, `TypedPropertyDescriptor`, `ConfigurationScope`, `SceneBinObj`, `AccountAssetDTO`, `IUserModelData`, `JsonObjectProperty`, `HttpAdapterHost`, `AdaptServer`, `WalletBalance`, `requests.ListModelDeploymentsRequest`, `CustomField`, `SettlementEncoder`, `TileProps`, `loader.LoaderContext`, `AxisSpec`, `AxisLabel`, `CosmosClient`, `MultiPickerOption`, `WidgetOptions`, `Then`, `ModifyEventSubscriptionMessage`, `IORedisInstrumentationConfig`, `SceneManager`, `PearlDiverSearchStates`, `BitmapData`, `Body`, `ReboostInstance`, `MDL0Renderer`, `U8.U8Archive`, `ChatParticipant`, `ActionProps`, `chokidar.FSWatcher`, `TupletNumber`, `UsageCollector`, `android.graphics.Typeface`, `FlexProps`, `BufferUseEnum`, `ClippedPolyfaceBuilders`, `AnchoredOperationModel`, `FeeLevel.Medium`, `QuerySuggestion`, `RepositoryData`, `ConfigChecker`, `MappedSetting`, `pxtc.CallInfo`, `DocParagraph`, `PreferenceScope`, `SnapshotDb`, `BuilderRun`, `NextHandleFunction`, `ParsedLog`, `Num`, `SavedObjectMigrationFn`, `RetryHandler`, `ReduceOptions`, `Layer`, `NavigationState`, `GuildService`, `StepRecoveryObject`, `CardService`, `ActionWithPayload`, `CheckBox`, `ProxyValue`, `PostComment`, `TileState`, `STPPaymentIntent`, `WorldReader`, `FramePublicAPI`, `ValidationEventTypes`, `ICompositionBody`, `GenericList`, `Repo`, `BatchedFunc`, `IDeployedApplicationHealthStateChunk`, `RepositoryEntity`, `JsonDocsComponent`, `tf.GradSaveFunc`, `MosaicPath`, `ChannelLeave`, `AnimatorState`, `HistoryStatus`, `ClassNode`, `NodeParser`, `MdcFormField`, `OrganizationEntity`, `PublishedStoreItem`, `d.CopyTask`, `IBuildStageContext`, `HttpError`, `child.ChildProcess`, `theia.Command`, `ITerminalOptions`, `DragEvent`, `BaseArrayClass`, `ParentSpanPluginArgs`, `ImportNameInfo`, `DeleteBucketPolicyCommandInput`, `ContractWrapper`, `PrettySubstitution`, `ManagedShortTermRetentionPolicyName`, `LevelGlobals`, `GalaxyMapIconStatus`, `EvaluateHandleFn`, `PermutationVector`, `ConfiguredPlugins`, `Declarations`, `NestedMap`, `MultiValue`, `PrismaConfig`, `NetworkId`, `FindArgs`, `AccountCustom_VarsEntry`, `ImageFov`, `StoreNames`, `VaultVersion`, `android.view.MotionEvent`, `ProtocolClient`, `ArrayConfig`, `BuildOutput`, `StateTimelineEvent`, `FormArray`, `NavigatedData`, `StripeShippingMethod`, `UpdateSettingModelPayload`, `Food`, `JSDocTypeLiteral`, `Imported`, `DeleteCertificateResponse`, `IStageConfigProps`, `FieldResultSetting`, `XPCOM.nsIURI`, `HalOptions`, `NoteModel`, `CalcAnimType`, `requests.ListShapesRequest`, `Sha256`, `UpdateJobCommandInput`, `SessionConfig`, `DraggableDirective`, `EdgeDisplayData`, `MergeBlock`, `ExecutionParams`, `Async`, `RequestCallback`, `GithubUser`, `AllowedKeyEntropyBits`, `CallableContext`, `EntityCollectionResolver`, `MaximizePVService`, `ImmutableStyleMap`, `TargetStr`, `SimpleChange`, `TextInsertion`, `DeviceSelection`, `SequenceConfiguration`, `LLRBNode`, `ISPRequestOptions`, `MenuType`, `ng.IScope`, 
`FeatureAppearance`, `GuidString`, `IScriptInfo`, `MaterialInstance`, `TheBasicMQStack`, `CkbMintRecord`, `LoginSuccess`, `ScriptObject`, `StoredCiphertext`, `EditorMode`, `Zip`, `SwitcherFields`, `PollingInterval`, `ListType`, `User`, `CheckAvailabilityProps`, `LookaroundAssertion`, `JsonTokenizer`, `FileDataSource`, `CallInfo`, `AxisLabelCircular`, `FbFormModelField`, `ComponentProperty`, `GetUpgradeStatusCommandInput`, `ExtendedAdapter`, `TxIn`, `RootStoreState`, `RadioValue`, `InstanceResult`, `STXPostCondition`, `TinymathAST`, `ParsedRange`, `MIRAssembly`, `Aspect`, `AdditionEdit`, `GroupParameters`, `PatternLayout`, `ConcreteComponent`, `IPropertyData`, `UpdateDeviceCommandInput`, `Concatenate`, `Second`, `JSX.HTMLAttributes`, `CustomHTMLElement`, `TPayload`, `DataTypeNoArgs`, `StorageHeader`, `DiagnosticCollection`, `ViewMode`, `ValidityState`, `FontData`, `VanessaTabItem`, `Observed`, `Registerable`, `GfxTextureDimension`, `CredentialOfferTemplate`, `HandlerCallback`, `RendererLike`, `EsDataTypeUnion`, `IStoreData`, `Rounding`, `Configuration`, `UpdatePackageCommandInput`, `WithBoolean`, `TT`, `StandardizedFilePath`, `ProtocolRunner`, `HotkeySetting`, `d.TransformOptions`, `EngineerSchema`, `StoryProps`, `UserLoginData`, `PointerDownOutsideEvent`, `OsdUrlTracker`, `MigrateCommand`, `CommandPayload`, `MenuItemProps`, `MyCompanyConfig`, `MIRFunctionParameter`, `HeaderGroup`, `AthenaRequestConfig`, `GraphQLResolverMap`, `IModalServiceInstance`, `PaymentsErrorCode`, `AuthError`, `CommandArg`, `Slots`, `AddressType`, `JKRCompressionType`, `StartDeploymentCommandInput`, `MainAccessResponse`, `CompositeType`, `BisenetV2CelebAMaskOperatipnParams`, `S3.GetObjectRequest`, `CoralContext`, `d.ListenOptions`, `GraphQLNonInputType`, `IExtensionMessage`, `InterfaceDefinitionBlock`, `ValueParserParams`, `ManagedID`, `RedBlackTreeIterator`, `WindowLike`, `TSESTree.ClassDeclaration`, `ParquetBuffer`, `TableBuilder`, `Ex`, `ITelemetryGenericEvent`, `ConventionalCommit`, `NetworkParams`, `RectF`, `Preparation`, `Apollo.Apollo`, `LanguageServiceExtension`, `ReportingEventLogger`, `MetaFunction`, `IOSNotificationPermissions`, `StructureTypeRaw`, `RewardTransactionList`, `RangeBucketAggDependencies`, `OtherNotation`, `Searchable`, `FormSection`, `MyObserver`, `IsBindingBehavior`, `DebugProtocol.SetVariableArguments`, `Enhancer`, `Transport`, `OrthogonalArgs`, `keyType`, `IUriMap`, `ButtonProps`, `TsExpansionPanelComponent`, `FnO`, `Models.WebHook`, `StartImportCommandInput`, `ReknownClient`, `Pbf`, `ChartLine`, `ConnectionState`, `TemplateTermDecl`, `Accessibility`, `GoalService`, `angular.ITimeoutService`, `IArrayType`, `AccessibilityKeyHandlers`, `MagicSDKWarning`, `SessionDataResource`, `NFT1155V2`, `NumericArray`, `requests.ListWhitelistsRequest`, `phase0.BeaconBlockHeader`, `WebViewMessageEvent`, `tf.Tensor5D`, `CachedType`, `OsmNode`, `CreateTransformer`, `SQLDatabase`, `StatusBarWidgetControl`, `IframeController`, `StoreCollection`, `SeriesMarkerRendererDataItem`, `ITenantSetting`, `BalmError`, `AllPlatforms`, `RailsFile`, `DataColumn`, `SpriteManager`, `CrudTestContext`, `Newline`, `GlobalConfiguration`, `SyslumennAuction`, `IFormatProvider`, `CompleteOption`, `SelectionSetToObject`, `WavyLine`, `ActivableKey`, `AbstractClass`, `Mouse`, `CloseEditor`, `ProgramProvider`, `VECTOR_STYLES`, `VerticalRenderRange`, `KeyIndexImpl`, `Dispatcher`, `E2EElement`, `AdminCacheData`, `MdcDialogRef`, `TypeSpec`, `DataCache`, `BBox`, `TargetConfig`, `vscode.OutputChannel`, `WhileStatement`, 
`TypeDBClusterOptions`, `AnyExpressionFunctionDefinition`, `BatchChangeSet`, `DockType`, `PushDownOperation`, `CacheVm`, `RepositoryInfo`, `Rgba`, `ISpec`, `IVersionedValueWithEpoch`, `TinaCMS`, `FormFieldErrorComponent`, `ServiceTemplate`, `StageName`, `ScriptParameter`, `TimeSpan`, `PickerOptions`, `YAMLNode`, `google.maps.Polygon`, `IAjaxSuccess`, `kind`, `ComponentPublicInstance`, `ParameterScope`, `Album`, `TestConfiguration`, `ActorContext`, `columnTypes`, `FileSystemFileHandle`, `ListComponent`, `ProgressReporter`, `BarLineChartBase`, `EngineWindow`, `Week`, `Bitmap`, `PokemonIdent`, `MinimongoDb`, `sdk.TranslationRecognitionCanceledEventArgs`, `RichLedgerRequest`, `VanessaGherkinProvider`, `BaseResourceHandlerRequest`, `ClientRemote`, `DaffAddressFactory`, `Parsed`, `StatusMessage`, `OutOfProcessStringReader`, `DragHelperTemplate`, `UriCommandHandler`, `DAL.KEY_5`, `AppStackMajorVersion`, `CategoryList`, `RendererFactory2`, `TradeDirection`, `IModdleElement`, `ExtHandler`, `DescribeNamespaceCommandInput`, `IFoundCursor`, `AutoInstrumentationOptions`, `ComponentCompilerLegacyContext`, `requests.ListDbSystemsRequest`, `FunctionOrConstructorTypeNode`, `PathHeadersMap`, `ComponentCompilerEvent`, `PieChartData`, `StageContentLayoutProps`, `InitializeHandlerOptions`, `CephLandmark`, `RosException`, `ts.NavigationTree`, `ListsSandbox`, `UpdateFileSystemCommandInput`, `RequestedCredentials`, `EventRequest`, `CommandService`, `AnimatorPlayState`, `IgetOpenRequests`, `EventResult`, `MediaRecorder`, `UrlSegment`, `JsDoc`, `TydomDeviceSecuritySystemData`, `Quality`, `ConfigParser`, `ManipulatorCallback`, `StringValueToken`, `TableDefinition`, `ConfirmDialogProps`, `SliceNode`, `ParseScope`, `ExecOpts`, `UniswapFixture`, `CheckFlags`, `ContactId`, `Json.ArrayValue`, `StyleRule`, `NodeRequest`, `TileIndex`, `SupClient.AssetSubscriber`, `StorageError`, `StepType`, `VariableNode`, `ConnectivityInfo`, `MessageFormatter`, `RNCookies`, `MdcTopAppBar`, `ProjectSettings`, `RxTranslation`, `DeleteProjectCommandOutput`, `Wire`, `ValidationVisOptionsProps`, `Enums`, `DelegateTransactionUnsigned`, `ActiveContext`, `FrameworkType`, `DebugConfigurationModel`, `MigrationDefinition`, `ISnapshotTreeWithBlobContents`, `GetInsightsCommandInput`, `KeyFunction`, `BlockTag`, `PermissionOverwriteResolvable`, `TypeMatcher`, `ApiDefService`, `MarginPoolInstance`, `DataViewBase`, `RelativePosition`, `IntentSummary`, `RendererContext`, `WordType`, `GLboolean`, `Serials`, `HintItem`, `requests.ListComputeImageCapabilitySchemasRequest`, `TransitionAnimation`, `BrowserError`, `ThunkArg`, `RepeatVector`, `IParserState`, `GuildMessage`, `GetThunkAPI`, `IPageContainer`, `TinaCloudCollection`, `DynamoDbDataSource`, `MapFn`, `PiEditUnit`, `ArrayType`, `ForkTsCheckerWebpackPluginState`, `ServiceWorker`, `FlattenContext`, `Opts`, `MethodGetRemainingTime`, `messages.SourceReference`, `Completion.Item`, `msRest.OperationURLParameter`, `StatsError`, `RPCMethodDescriptor`, `child_process.SpawnOptions`, `ZWaveLogInfo`, `BaseAuthState`, `FocusTrapFactory`, `WorkArea`, `XRView`, `IpcMainListener`, `ShippingAddress`, `BagOfCurves`, `VariableValueTypes`, `SourceBufferKey`, `FrameControlFactory`, `StubStats`, `ADialog`, `PersistenceProvider`, `ReferencedFields`, `FieldsAndMethodForPositionBeforeCurrentStrategy`, `RoomEntity`, `VectorPosition`, `AppElement`, `DSL`, `UpdateRegexPatternSetCommandInput`, `HTMLFrameElement`, `ICellRenderer`, `JestPlaywrightConfig`, `EditText`, `MatBottomSheetRef`, `APIs`, `StreamReport`, `Column`, 
`RNSharedElementStyle`, `ListGatewaysCommandInput`, `InspectionTimeRange`, `ObservableQueryBalanceInner`, `IconShapeTuple`, `RawRule`, `RobotApiResponse`, `LoggingConfiguration`, `TaskInfo`, `GX.PostTexGenMatrix`, `NotifyMessageType`, `CommandOption`, `CdsControl`, `HelloService`, `TSReturn`, `SiteInfo`, `CustomSmtpService`, `AggParamsItem`, `RecursiveArray`, `IHook`, `AssociateServiceRoleToAccountCommandInput`, `PayloadHandler`, `SwUrlService`, `RegularPacket`, `Templates`, `NodeJS.ReadWriteStream`, `IGlTFModel`, `JwtAdapter`, `ParseResponse`, `ProjectIdAndToken`, `SExpr`, `PainterElement`, `UseHydrateCacheOptions`, `EntityModel`, `Listener`, `DirectiveArgs`, `GlobalStoreDict`, `MatchPresenceEvent`, `MeshPrimitive`, `NexusObjectTypeDef`, `DeferredNode`, `AdminProps`, `IColumnDesc`, `IResolveDeclarationReferenceResult`, `ITouchEvent`, `ImageMapperProps`, `TRPCResult`, `requests.ListManagementAgentsRequest`, `OrderedStringSet`, `GraphInputs`, `CacheIndex`, `EntryHashB64`, `WatchStopHandle`, `IResizedProps`, `SimpleBBox`, `ODataParameterParser`, `RepositoryChangeEvent`, `RNode`, `BaseMemory`, `IFileSnapshot`, `AlarmAction`, `SelectableObject`, `SceneGfx`, `DynamicFormArrayGroupModel`, `JobDetails`, `u64spill`, `S.State`, `TemplateRef`, `PeerTreeItem`, `TalkOpenChannel`, `ImportDefaultSpecifier`, `Encounter`, `messages.DocString`, `GDQLowerthirdNameplateElement`, `RebaseConflictState`, `RepositoriesStore`, `RollupConfig`, `Mocks`, `ImageDecoder`, `StreamInfo`, `SitemapXmpResults`, `NodeKey`, `IAppStore`, `Signals`, `RouteModules`, `ExpNumSymbol`, `RepoSnapshot`, `SCN0_AmbLight`, `CacheTransaction`, `Suggestions`, `IOrganizationDocument`, `CurrentUserService`, `Geography`, `NotificationAction`, `ModelDeploymentType`, `ServerConnection.ISettings`, `StringLookup`, `GameBits`, `express.Express`, `WaitForScript`, `GetNamespaceResponse`, `ContainerInspectInfo`, `CounterState`, `StateDefinition`, `WebApi`, `DefaultFilterEnum`, `P2SVpnGateway`, `BucketAggType`, `ITimelineModel`, `MeshPhysicalMaterial`, `DatabaseVulnerabilityAssessment`, `EntityT`, `LinkedService`, `Requirement`, `ICSR`, `SoftVis3dShape`, `DiceRollerPlugin`, `SSHConfig`, `number`, `Hapi.ResponseToolkit`, `NgxDateFnsDateAdapter`, `PeekZFrame`, `d.ExternalStyleCompiler`, `SlotFilter`, `ReadContext`, `OAuthConfigType`, `ApplicationWindow`, `ExportSummary`, `TestNodeProvider`, `EntityRepository`, `BottomTabBarProps`, `MediaDevices`, `TextDocumentShowOptions`, `ToolbarProps`, `Base16Theme`, `LocalMicroEnvironment`, `AppBarProps`, `ComponentState`, `ODataQueryOptionsHandler`, `ts.FormatCodeSettings`, `schema.Specification`, `ObjectSelectionListState`, `WalletConnectConnector`, `IUsageMap`, `CutLoop`, `CardData`, `TransposedArray`, `TaskScheduling`, `DecoratedComponentClass`, `CreateModelResponse`, `IParameterValuesSourceProvider`, `E2EScanScenarioDefinition`, `WalletEntry`, `TypeEquality`, `InMemoryOverlayUrlLoader`, `ModuleStoreSettings`, `SFCDeclProps`, `BridgingPeerConnection`, `DeleteRepositoryCommand`, `CipherImportContext`, `ScoreService`, `ColumnId`, `CssItem`, `RestClient`, `StoryFn`, `d.ComponentOptions`, `StatBuff`, `TaskLogger`, `IQueryParamsResult`, `LiskErrorObject`, `google.maps.LatLng`, `Highcharts.DataGroupingApproximationsArray`, `MatchPathAsyncCallback`, `ITargetGroup`, `SandDance.VegaDeckGl.types.LumaBase`, `ScreenContextData`, `WebappClient`, `ImageObject`, `ModifyEventSubscriptionCommandInput`, `IRenderTask`, `CalculationYear`, `UpdateQuery`, `MicrosoftSynapseWorkspacesResources`, `ProposeCredentialMessage`, 
`UnregisterCallback`, `IModifierKeys`, `Parser.Infallible`, `BTCAccountPath`, `RouteAnimationType`, `CommanderOptionParams`, `TimeOffRequest`, `GameOptions`, `TerminalWidget`, `Submission`, `CSSSelector`, `LaxString`, `TranslationEntry`, `requests.ListBackupDestinationRequest`, `NodeEntry`, `HttpResponseOK`, `DownloadService`, `SpringConfig`, `ThunkDispatch`, `RPCMessage`, `TokenType`, `AlertMessage`, `MediaConfig`, `td.SMap`, `ExpressionModel`, `IAheadBehind`, `LoggerProvider`, `CPU`, `ClassExpression`, `EntityContainer`, `ws`, `DeleteSiteCommandInput`, `DialogComponent`, `ProfileServiceProxy`, `FilesystemNode`, `Couple`, `GeoJSON.Feature`, `CreateProgram`, `TFJSBinding`, `requests.ListExternalPluggableDatabasesRequest`, `RawPermissionOverwriteData`, `VarExpr`, `TableBatchSerialization`, `JsonWebKey`, `LVal`, `AnyArray`, `EventListenerOptions`, `JSXIdentifier`, `ErrorSubscriptionEvent`, `MessageTarget`, `lua_State`, `DesktopCapturerSources`, `ParsedConfirmedTransaction`, `winston.Logger`, `TrackParseInfo`, `DataProperty`, `NumOrString`, `ActivityInterface`, `ast.BinaryNode`, `HierarchyRpcRequestOptions`, `Concat`, `CreatePolicyVersionCommandInput`, `Day`, `CrochetRelation`, `VertexFormat`, `Elements.RichTextElement`, `ContextMenuItem`, `ts.FunctionExpression`, `NotificationSettings`, `Models.Exchange`, `IHistoryFileProperties`, `Notified`, `ExportedDeclarations`, `RenderableElement`, `FullIndexInfo`, `CallbackT`, `UsageStats`, `ExtraSessionInfoOptions`, `EntityConstructor`, `ChildInfo`, `JsonType`, `MgtFileUploadItem`, `Pager`, `NVMEntry`, `TransformStream`, `DistrictsGeoJSON`, `ReflectiveKey`, `Writeable`, `CourseType`, `HydrateStyleElement`, `GetPolicyRequest`, `CkElementContainer`, `TestInterface`, `ITaskSource`, `IStatusFile`, `AuditEvent`, `RecordingOptions`, `core.BTCInputScriptType`, `Events.postdebugdraw`, `SuperClient`, `MongooseModel`, `GradSaveFunc`, `SchemePermissions`, `LibResolver`, `ExploreOptions`, `RPCDescriptor`, `MatBottomSheetConfig`, `MockDialogRef`, `ObjectStorageClient`, `ERC20Mock`, `TwistyPlayerModel`, `Gender`, `SchemaDef`, `Cartesian3`, `SessionGetter`, `WsConnectionState`, `GitFileStatus`, `LineAnnotationStyle`, `CBPeripheral`, `VisualObjectInstanceEnumeration`, `ChartModel`, `ProxyServerType`, `ng.ILocationService`, `ShipBlock`, `DDL2.OutputDict`, `CreateRequestBuilder`, `PathAddress`, `IAvatarProps`, `ChannelData`, `CityPickerColumn`, `DeletedAppRestoreRequest`, `AngularFireAuth`, `LocalStorageService`, `OpenYoloCredentialRequestOptions`, `FilesState`, `BuildrootAction`, `HttpsFunction`, `ICore`, `ScatterSeries`, `ReuseItem`, `React.ChangeEventHandler`, `Cookies`, `ServiceControlPolicyResource`, `GitPullRequestWithStatuses`, `BatchNormalizationLayerArgs`, `IStashTab`, `CryptoFactory`, `PartyMatchmakerAdd_NumericPropertiesEntry`, `IAutoEntityService`, `SQS`, `ObjectMap`, `SliderBase`, `Servers`, `ITestStep`, `RobotHost`, `GlobalCredentials`, `WasmQueryData`, `GetPermissionPolicyCommandInput`, `ErrorSubscriptionFn`, `BotonicEvent`, `ICategoryCollection`, `OpenSearchDashboardsReactContext`, `MoonbeamDatasource`, `ActionTicketParams`, `RE6Module`, `FieldNode`, `HttpMetadata`, `Fig.Spec`, `net.Socket`, `ExpressionRenderError`, `iElementInfo`, `TStyleSheet`, `UpdateAppInstanceUserCommandInput`, `PyrightPublicSymbolReport`, `MetadataArgsStorage`, `RawMetricReport`, `AsyncSink`, `UiActionsService`, `ViewProps`, `ethers.providers.BlockTag`, `TableSearchRequest`, `Inspector`, `LogsConfig`, `ScaleBand`, `UrlFormat`, `ArenaAllocationResult`, `QCfg`, `SlideDefinition`, 
`ParserMessageStream`, `EditionId`, `TreeState`, `ModalDialogParams`, `WebpackConfig`, `IFilterTarget`, `SeedOnlyInitializerArgs`, `SvgItem`, `HardhatRuntimeEnvironment`, `P2PMessagePacket`, `BodyProps`, `a.Expr`, `LogWriteContext`, `Spine`, `WildcardProperty`, `Quill`, `IAddresses`, `LocalTitle`, `ListJobRunsCommandInput`, `HIRNode`, `BreadcrumbsProps`, `FabRequestResponder`, `Procedure`, `PoolClient`, `DaffCartStorageService`, `IHttpClientResponse`, `TemplateArguments`, `SecurityDataType`, `WebPhoneSIPTransport`, `DeleteRequestBuilder`, `Hunspell`, `PageContext`, `ColorPoint`, `PagesLoaded`, `SideType`, `MessageStateWithData`, `IBankAccount`, `FeedFilterFunction`, `ParsedAccountBase`, `AnimGroup_TexMtx`, `ListDatasetEntriesCommandInput`, `SakuliCoreProperties`, `QState`, `ObjectOf`, `ResizeChecker`, `ConfigParameters`, `K3dClusterNodeInfo`, `EnvironmentResource`, `SelectToolConfig`, `$T`, `d.PrerenderUrlResults`, `AvailableProjectConfig`, `S3`, `LayoutService`, `PlanPriceSpecManager`, `ReputationToken`, `IZoweUSSTreeNode`, `Twit`, `RefList`, `HTTPNetworkInterface`, `PolicyResponse`, `GcListener`, `PvsDefinition`, `nsIURI`, `SignedOrder`, `ConeRightSide`, `NamespaceGetter`, `BlenderPathData`, `NormalizedReadResult`, `PushPathResult`, `ReadModelInterop`, `ELanguageType`, `FullDir`, `GeometryData`, `BlockchainWalletExplorerProvider`, `POCJson`, `NonMaxSuppressionResult`, `GraphQLFieldConfig`, `SessionTypes.RequestEvent`, `SavedObjectAttributes`, `CronExpression`, `forceBridgeRole`, `TestExplorer`, `MiddlewareMap`, `LodopResult`, `MultiValueProps`, `ScmFileChangeNode`, `DominoElement`, `ShareStore`, `FileList`, `ModelDispatcher`, `MockComponent`, `BrowserFields`, `EquipmentService`, `OpenApiApi`, `ControllerClass`, `EdgeGeometry`, `GlitzClient`, `CommonLayoutParams`, `ClientData`, `ErrorAction`, `AdditionalProps`, `PermissionsService`, `Cartographic`, `ContinueResponse`, `GeoLevelInfo`, `ChatErrors`, `PartialLax`, `unitOfTime.Base`, `DescribeAppInstanceUserCommandInput`, `Port`, `TabElement`, `AnimationComponent`, `JoinPredicate`, `IBucketAggConfig`, `StorageIdentifier`, `PackageRelativeUrl`, `PrivateUserView`, `Comparator`, `Gate`, `CartoonOperatipnParams`, `This`, `BucketSegment`, `StripeSetupIntent`, `Types.IResolver`, `LspDocuments`, `RobotApiErrorResponse`, `SymbolicTensor`, `ContinuousParameterRange`, `TwingNodeExpression`, `IssuerPublicKeyList`, `SentInfo`, `t.IfStatement`, `ProjectInterface`, `CommonInterfaces.Plugins.IPlugin`, `RendererOptions`, `AsyncStateNavigator`, `ICurrentArmy`, `Ticket`, `ViewDefinitionProps`, `UnixTerminal`, `ProsodyFilePaths`, `GetRuleGroupCommandInput`, `SetModel`, `GeneratedQuote`, `NodeSDK`, `ToastRequest`, `FetcherField`, `ScaleOrdinal`, `IProjectInfo`, `ColorOverrides`, `IInterpreterRenderHandlers`, `FieldEntity`, `nockFunction`, `ISettingStorageModel`, `FeederPool`, `VirtualScope`, `FileResource`, `JulianDay`, `GitCommittedFile`, `IThrottler`, `reminderInterface`, `Animated.Animated`, `IBazelCommandAdapter`, `GameEvent`, `E`, `Environments`, `Printable`, `CompaniesService`, `NativeSyntheticEvent`, `NavigationNavigator`, `KeccakHash`, `ApplicationTypeGroup`, `MergeableDeclarationSet`, `DeleteAppCommandInput`, `requests.ListInstanceAgentPluginsRequest`, `TxMassMigration`, `NotificationProperty`, `SocketConnection`, `StdFunc`, `Tagging`, `TileMetadataArgs`, `ValidationProblemSeverity`, `IView`, `LoadBalancerListenerContextProviderPlugin`, `SnippetString`, `FilterOperator`, `EdmT`, `DeleteUserCommandInput`, `UseTransactionQueryOptions`, `RangeRequest`, 
`VersionEdit`, `StyletronComponent`, `BorderRadiusDirectional`, `reduxLib.IState`, `PropItem`, `LambdaServer`, `GenericIndexPatternColumn`, `DescribeFargateProfileCommandInput`, `CanvasTypeHierarchy`, `HTMLVideoElement`, `Fillers`, `W6`, `FundingCycleMetadata`, `IRawLoadMetricReport`, `NetworkName`, `Stereo`, `StreamOptions`, `TagSpec`, `VisitOptions`, `WithExtendsMethod`, `StringBuilder`, `EntContract`, `ScrollerAnimator`, `SObjectDescribe`, `OAuthCredential`, `TagValue`, `Sha512`, `FixedDepositsService`, `NormMap`, `AnalyticsModule`, `ShowConflictsStep`, `Matrix3x3`, `GravityArgs`, `FeatureProps`, `MIRPrimitiveListEntityTypeDecl`, `ModelCompileArgs`, `MongoQueryModel`, `FeeRate`, `Conversation`, `EsAssetReference`, `AllowArray`, `PingPayload`, `Alignment`, `IProc`, `PermissionsCheckOptions`, `CreateHotToastRef`, `TRANSFORM_STEP`, `VariantGeometry`, `LegendOptions`, `IAsyncEqualityComparer`, `RouteValidationResultFactory`, `Node3D`, `ContractsState`, `requests.ListAutonomousDatabasesRequest`, `UserThemeEntity`, `ILectureModel`, `Pattern`, `ColumnConfig`, `RBXScriptConnection`, `HelmRelease`, `interfaces.BindingWhenOnSyntax`, `MatchedFlow`, `GetDomainNameCommandInput`, `HandlerNS.Event`, `HTMLAttribute`, `FacetsState`, `QUnitAssert`, `IFollow`, `LazyCmpLoadedEvent`, `EditValidationResult`, `ScriptingLanguage`, `WriteLock`, `LegacyCallAPIOptions`, `Babel`, `Blip`, `ValueMetadata`, `iTunesMusicMetaProvider`, `PersonEntity`, `MdxListItem`, `RuleFunctionMeta`, `ng.ui.IStateProvider`, `Assembly`, `MessagesService`, `VcsItemConfig`, `TTK1`, `MemberRepository`, `MaxAge`, `ModuleBuilderFileInfo`, `TransactionVersion.Mainnet`, `TModule`, `TypeScriptVersion`, `ExpressionAstFunctionBuilder`, `Spherical`, `restm.IRestResponse`, `RawRuleset`, `PublicApi`, `FileBrowserItem`, `SimulcastUplinkObserver`, `SafeVersion`, `Locale`, `d.HttpRequest`, `GX.CompCnt`, `MediaStreamOptions`, `LspDocument`, `TranslationGroup`, `GraphicContentProps`, `ConsolidatedCertificateRequest`, `ISortOptions`, `LoggingConfig`, `IStrokeHandler`, `Dummy`, `AudioRule`, `VorlonMessage`, `ISocketBase`, `FrontCardsForArticleCount`, `RedisCache`, `d.JsonDocsProp`, `Col`, `SeparatedNamedTypes`, `interfaces.Binding`, `DAL.DEVICE_ID_COMPASS`, `NSDateComponents`, `ChangeListener`, `IStatusButtonStyleProps`, `WebAccount`, `OfAsyncIterable`, `TransactionSegWit`, `TestKafka`, `LeftRegistComponentMapItem`, `ExprListContext`, `UserInfoResource`, `WrappedWebSocket`, `TreeModel`, `TableComponentProps`, `IMusicMeta`, `types.signTx`, `DispatchOptions`, `TabularSource`, `admin.firestore.DocumentSnapshot`, `IGradient`, `PlanSummaryData`, `SYMBOL`, `TEObject`, `VideoModes`, `EFood.Session`, `IConnect`, `SkeletonHeaderProps`, `GraphqlApi`, `FilterTrailersStatusValues`, `SwaggerPathParameter`, `ts.Program`, `Options.Publish`, `ComponentMetaData`, `IDriverInfo`, `OsmRelation`, `JSEDINotation`, `CallCompositeStrings`, `Substitution`, `JsonAPI`, `BotMiddleware`, `CharacterSet`, `ExpressRequestAdapter`, `ObserveForStatus`, `WebSiteManagementModels.Site`, `AttributeParser`, `PluginDefinition`, `NamedFluidDataStoreRegistryEntries`, `CustomSkillBuilder`, `FilmQueryListWrapper`, `PermissionConstraints`, `MatTableDataSource`, `DeleteJobResponse`, `Flags`, `Floating`, `SnapshotAction`, `FilterEvent`, `ITileDecoder`, `ResolvedConfigFileName`, `LightSet`, `GitStatusFile`, `NumberValueSet`, `HipiePipeline`, `PrintStackResult`, `IBuildConfig`, `AttestationsWrapper`, `ma.TaskLibAnswers`, `TreeGridTick`, `EventInterpreter`, `IBoxSizing`, `RendererElement`, `ToastType`, 
`FutureWalletStore`, `AnyResource`, `CubicBezierAnimationCurve`, `angular.auto.IInjectorService`, `L2Args`, `InvocationContext`, `UserConfigDefaults`, `ArDriveAnonymous`, `WebsiteScanResultProvider`, `HostString`, `TestSandbox`, `MsgPieces`, `GetReviewerStatisticsPayload`, `ExprEvaluatorContext`, `TmdbMovieResult`, `HttpResponseRedirect`, `LeakDetectionSignal`, `FloatingLabel`, `d.FsReaddirOptions`, `PropertyResolveResult`, `MatcherGenerator`, `DashboardProps`, `IFileModel`, `SwipeActionsEventData`, `CommerceTypes.CurrencyValue`, `UpdateDestinationCommandInput`, `ComponentReference`, `AmqpConnection`, `YearCell`, `FirmwareUpgradeIpcResponse`, `IMenuItem`, `ProcessInfo`, `Label`, `TimeoutRacer`, `RumSessionManager`, `StartRecordingRequest`, `MutableSourceCode`, `ListConfigurationSetsCommandInput`, `BaseSkillBuilder`, `IChannelsDatabase`, `DocumentSpan`, `OverlapRect`, `KanbanRecord`, `Name`, `CartesianTickItem`, `PassNode`, `ApplicationService`, `LegacyTxData`, `DataFrameAnalyticsStats`, `GenericNotificationHandler`, `Dependency1`, `SerializedStyles`, `BuildrootUpdateType`, `Icu`, `alt.Player`, `IRegisterNode`, `ClipShape`, `ExecutionOptions`, `Endpoints`, `IBuildTaskPlugin`, `SignShare`, `Path2D`, `Electron.BrowserWindowConstructorOptions`, `BuilderOptions`, `BasicColumn`, `OmitInternalProps`, `NavigatorOptions`, `IsWith`, `IWmPicture`, `PowerAssertRecorder`, `Callbacks`, `HapiHeaders`, `AppStateType`, `SRule`, `ContextMenuProps`, `InstallForgeOptions`, `IUserPP`, `AnalyticsConfig`, `TemplateProviderBase`, `AssociationValue`, `azure.Context`, `Vector4`, `ColorType`, `PaymentResponse`, `ZipLocalFileHeader`, `btVector3Array`, `PatchFunction`, `EventSystemFlags`, `SummaryCalculator`, `IChannelAttributes`, `NotificationsStart`, `ResponsiveProp`, `MenuID`, `SpaceBonus`, `LitElement`, `ProviderPosition`, `ParseAnalysis`, `ObjectFieldNode`, `BotSpace`, `MDL0`, `CreateReplicationConfigurationTemplateCommandInput`, `Captcha`, `CreateIPSetCommandInput`, `SolcOutput`, `CurveChain`, `NotificationPermission`, `TestSink`, `OptionalObject`, `ThemeCoreColors`, `ContextWithFeedback`, `ArrayWrapper`, `InternalSession`, `URLQuery`, `SidebarProps`, `BackgroundFilterSpec`, `DebugProtocol.LaunchResponse`, `LocatorExtended`, `MDCChipCssClasses`, `AttrParamMapper`, `PuppetCacheContactPayload`, `BlockchainExplorerProvider`, `ExpressionsCompilerStub`, `DBOp`, `Tensor6D`, `AttributePub`, `CodeActionsOnSave`, `PerformanceObserverEntryList`, `IError`, `React.UIEvent`, `videoInfo`, `TaskProps`, `BotAction`, `SavedObjectsResolveResponse`, `ObjectSchemaProperty`, `ITodoState`, `SerializableValue`, `IStateCallback`, `CodePointPredicate`, `IotRequestsService`, `InputToken`, `IdentityClient`, `ISnippetInternal`, `JPABaseParticle`, `MediaPlaylist`, `UniqueSelectionDispatcherListener`, `IMainClassOption`, `Watermark`, `IVideoService`, `InMemoryProject`, `RegionLocator`, `DynamoDB.PutItemInput`, `UsersController`, `InsertEvent`, `Metadata_Add_Options`, `SubSymbol`, `ShapePath`, `EventCategory`, `CacheStorageKey`, `ITestScript`, `Creator`, `DocumentChangeAction`, `cg.Color`, `OrderForm`, `ArgumentsCamelCase`, `PanelLayout`, `WebGLComponent`, `tfconv.GraphModel`, `IScreenInstance`, `SerialBuffer`, `Vp8RtpPayload`, `AbstractToolbarProps`, `ProtocolMapperRepresentation`, `HdEthereumPayments`, `SortableEdge`, `DAL.DEVICE_OK`, `CodebuildMetricChange`, `GfxAttachmentP_WebGPU`, `MonoTypeOperatorAsyncFunction`, `ZoneModel`, `PA`, `AttendeeModel`, `SitecorePageProps`, `MiddlewareArray`, `IdentityView`, `SKLayer`, `TransformProps`, 
`LoggerText`, `OptionEntry`, `FN`, `QueryEngineBatchRequest`, `GrpcConnection`, `OptionService`, `OptionalKind`, `DBDoc`, `BuildConditionals`, `BatchRequestSerializationOptions`, `SavedObjectsRemoveReferencesToOptions`, `XHRResponse`, `ExpressionOperand`, `IBufferView`, `ReCaptchaInstance`, `TSDNPromise.Reject`, `ListingNodeRow`, `OptionsNameMap`, `ClientMetricReport`, `MDCChipActionFocusBehavior`, `DAL.DEVICE_ID_MSC`, `NestedRoutes`, `TestLedgerChannel`, `OAuthClient`, `GetItemFn`, `FileCodeEdits`, `ModuleInstanceState`, `ManifestMetaData`, `FunctionalComponent`, `OperationURLParameter`, `AlternatingCCTreeNode`, `ApolloResponse`, `DateFnsConfigurationService`, `IDiagram`, `ListImagesResponse`, `TestContextData`, `EmitFiles`, `IWebPartContext`, `SuiteThemeColors`, `IAttrValue`, `FlattenedXmlMapWithXmlNameCommandInput`, `ANodeStm`, `ContractReceipt`, `ModeName`, `ResourceDetails`, `ElementAnalysis`, `IProtoNode`, `Factory`, `LayerRecord`, `TLE.StringValue`, `Symbol`, `TfCommand`, `BufferWriter`, `StackNavigationProp`, `XPCOM.nsIChannel`, `DataTexture`, `lsp.Position`, `Cheerio`, `ITransform`, `ThematicDisplayProps`, `RequestForm`, `ListTagsForResourceOutput`, `GroupTypeUI`, `requests.ListRunsRequest`, `ITEM_TYPE`, `IRGBA`, `IModuleMinificationResult`, `ListGroupUsersRequest`, `FacetOption`, `SecondaryIndexLayout`, `VpcTopologyDescription`, `ResourceState`, `RedisCommandArguments`, `L`, `AuthGuard`, `CustomBinding`, `NoelEvent`, `WebpackError`, `CallClient`, `ILoggedProxyService`, `InstanceContainer`, `Ping`, `FileSystemProviderWithOpenReadWriteCloseCapability`, `Arg`, `Model`, `MonacoEditor`, `$p_Declaration`, `ReferenceMonthRange`, `ScaleModel`, `Settings`, `CredOffer`, `requests.ListAutonomousDatabaseDataguardAssociationsRequest`, `TabContainerPanelComponent`, `PageObject`, `Map4d`, `IncrementalNodeArray`, `StoreResource`, `FactionMember`, `TableCellPosition`, `requests.ListDomainsRequest`, `SourceService`, `ParsedSource`, `CollatedWriter`, `FormAzureStorageMounts`, `FreeBalanceState`, `GraphicStyles`, `TestHotObservable`, `ReportId`, `Sash`, `StructuredError`, `HardwareModules`, `EventInput`, `WorkRequest`, `MeshNormalMaterial`, `GX.BlendFactor`, `ListDomainsCommandOutput`, `Money`, `MessageBuilder`, `PrismScope`, `MdcRipple`, `AsyncResult`, `TransformOption`, `CallProviderProps`, `PackageName`, `OriginalDocumentUrl`, `ThyResizeEvent`, `StorageAccount`, `SearchUsageCollector`, `IDatabaseConfigOptions`, `TriumphCollectibleNode`, `TspanWithTextStyle`, `Layout`, `UseCase`, `AssertionLocals`, `Binary`, `Extract`, `grpc.ServiceError`, `SelectableTreeNode`, `YVoice`, `OperationTypes`, `nsIDOMWindowUtils`, `HttpCode`, `CommandLineTool`, `SessionImpl`, `BooleanCB`, `LightingFudgeParams`, `ItemBuilder`, `ICrop`, `Categories`, `models.RegEx`, `SetbackState`, `SymbolType`, `Datatype`, `PursuitRow`, `FileDescriptorProto`, `FSNoteStorage`, `InversifyExpressServer`, `PngPong`, `DeleteUserCommandOutput`, `DateAxis`, `UnaryOpNode`, `ResolveablePayport`, `viewEngine_ViewRef`, `IAccountInfo`, `ESLSelectOption`, `PatchFile`, `CustomImage`, `Resolve`, `TreeWalker`, `ParserInfo`, `QuestionCollection`, `StackActionType`, `NavigableHashNode`, `CloseButtonProps`, `ScriptObjectField`, `IModelRpcProps`, `CreateImportJobCommandInput`, `CountArguments`, `AppStoreModel`, `IApplication`, `CustomContext`, `PreviewSettings`, `Pagination`, `CronOptions`, `Shelf`, `LGraph`, `VcsAccount`, `SuccessAction`, `IFieldMap`, `Id`, `DescribeDatasetCommand`, `PhoneNumber`, `SagaActionTypes`, `AlertService`, `ModList`, 
`SmartHomeHandler`, `SCNVector3`, `VirtualApplication`, `StorageInterface`, `redis.ClientOpts`, `RemoteParticipant`, `ByteString`, `ParsedTestObject`, `IAdministrationItemRoute`, `ConfigSchema`, `IHotspotIndex`, `TestNodeList`, `RepositoryService`, `HierarchyProvider`, `CodeUnderliner`, `ConstRecord`, `ProposalManifest`, `ExtractGroupValue`, `ActionsRecord`, `DocsTargetSpec`, `OpenerOptions`, `Math.Vector3`, `MediationStateChangedEvent`, `BrowserRequest`, `IDetailsProps`, `web3.Connection`, `ChainID`, `Injectable`, `DecodedLog`, `AllowedLanguage`, `IParagraphMarker`, `DescribeClustersRequest`, `WaitTaskOptions`, `Links`, `NullableSafeElForM`, `PopoverProps`, `Fail`, `SplitDirection`, `VideoDeviceInfo`, `SpyObject`, `SubgraphDataContextType`, `BlobStore`, `TitleService`, `BoneSlot`, `ParsedUtil`, `Lumber`, `SweetAlertOptions`, `ApiOptions`, `PrintExpressionFlags`, `Region`, `WaitTask`, `ProdoPlugin`, `Autocomplete`, `Matcher`, `InputStep`, `BSPRenderer`, `TestApp`, `GeneratedKeyName`, `IResponse`, `AccentIconStyles`, `OptionValue`, `ListClustersCommandInput`, `InputTextNode`, `ChainableHost`, `Replay`, `AnalysisCompleteCallback`, `UpdateOneInputType`, `FormInputs`, `SocketService`, `Margins`, `CreateAccountCommandInput`, `TThis`, `func`, `LocalTag`, `HTMLIonBackdropElement`, `DeleteNetworkProfileCommandInput`, `BrowserFetcher`, `ServerIO`, `InstallationQuery`, `MpegFrameHeader`, `EnabledFeatures`, `requests.ListMaintenanceRunsRequest`, `JSDocNonNullableType`, `PaneProperty`, `WithCSSVar`, `IBytes`, `ParsedOrderEventLog`, `IPriceDataSource`, `TaggedTemplateExpression`, `RenderFlag`, `IconBaseProps`, `CCAPI`, `_IIndex`, `CampaignTimelineChanelsModel`, `ToneAudioBuffer`, `MemberType`, `indexedStore.FetchResult`, `TextureManager`, `Lib`, `ApplicationOpts`, `CategoryTranslation`, `SendResponse`, `MemoryRenderer`, `NotificationAndroid`, `CannonBoxColliderShape`, `ContractBuilder`, `LockedDistricts`, `Segment`, `DiagnosticWithFix`, `Architecture`, `Objkt`, `HTTP_METHODS`, `Injection`, `DefaultEditorSize`, `Highcharts.Popup`, `LegacyRequest`, `HierarchyQuery`, `CollectionProp`, `AuthParams`, `AwsOrganizationReader`, `DaffCategoryFilterRangeNumericFactory`, `OrderByStep`, `_Code`, `LaunchEventData`, `ChangeSet`, `EditableNumberRangeFilter`, `MessageReceivedListener`, `IGameObject`, `TranslationItemBase`, `GroupParameterMethod`, `ColorAxis.Options`, `EmitType`, `EvaluatedChange`, `GasMode`, `SearchSequence`, `RouterCallback`, `ProposalActions`, `StraightCurved`, `org`, `ValidateFunction`, `LegendItemExtraValues`, `TRouter`, `ComponentCompilerListener`, `PromisedComputed`, `AudioStreamFormatImpl`, `Toolbar`, `JsonRpcProxy`, `TAttributes`, `IAzureNamingRules`, `AggConfig`, `SecurityGroupRule`, `ITrackInfo`, `RegistryInstance`, `PageElement`, `EqualContext`, `RepeatForRegion`, `AuthenticateFacebookRequest`, `t.VariableDeclaration`, `UserConfigExport`, `Euler`, `FILTERS.PHRASES`, `RigidBody`, `DialogSubProps`, `Model.LibraryStoreItemState`, `IPropertyWithName`, `LayoutNode`, `ElasticPool`, `ExecController`, `InferableAction`, `TileSet`, `ShContextMenuItemDirective`, `UserSchema`, `GoogleBooksService`, `EdgeImmutPlain`, `DummyNode`, `DateMarker`, `CompilerConfiguration`, `IRenderOptions`, `IAppProps`, `SettingsCallback`, `CreateAliasRequest`, `ErrorArgs`, `AnimationPlayer`, `BasicTemplateAstVisitor`, `AutoconnectState`, `SerializedObjectType`, `ConceptMap`, `PaginationNextKey`, `FolderNode`, `ObsConfiguration`, `ColorAxis`, `NodeLoadMetricInformation`, `CallbackManager`, `QueryBuilderFieldProps`, 
`TECall`, `Electron.Event`, `FinalInfo`, `Processes`, `ServiceDescriptorProto`, `MapMode`, `RPiComponentType`, `PoolFields`, `IHawkularRootScope`, `GetResponseBody`, `EventCallback`, `Transform2D`, `THREE.PerspectiveCamera`, `PartialResolvedVersion`, `Drone`, `api.ITree`, `LightInfo`, `FocusZoneDefinition`, `PortInfo`, `Bm.ComposeWindow`, `NavOptions`, `ShuftiproInitResult`, `IVectorLayer`, `Hill`, `ParsedDirectiveArgumentAndInputFieldMappings`, `ClipId`, `FallbackProvider`, `HoistState`, `ConnectedPosition`, `DialogflowApp`, `IAssignment`, `RouteEffect`, `TrackId`, `ScanCommandInput`, `requests.ListManagementAgentImagesRequest`, `ParameterSpec`, `DeleteAuthorizerCommandInput`, `IFileInfo`, `B10`, `PropertyChangedEventArgs`, `ReconfigResponseParam`, `ICellInfo`, `TempFile`, `MiddlewareArgs`, `InitConfiguration`, `ColorPresentation`, `CachedKey`, `HTMLIonTabElement`, `SvelteConfig`, `Thumb`, `ISerializedActionCall`, `WithStatement`, `EditorFile`, `ActorComponent`, `OpenChannelMessage`, `BrowserSimulation`, `IOdspResolvedUrl`, `DictionaryType`, `FieldTypes`, `ParseTreeListener`, `LocationObject`, `ast.SyntaxNode`, `RatingStyleProps`, `TestMethod`, `TexMap`, `Uni.Node`, `BaseParser`, `AppHelperService`, `TestController`, `PackageChangelogRenderInfo`, `DropdownItemProps`, `com.nativescript.material.bottomsheet.BottomSheetDialogFragment`, `TouchPulse`, `Survey.Survey`, `CachedPackage`, `CollidableCircle`, `ScrollToColumnFn`, `CornerSite`, `ReactMouseEvent`, `MetricId`, `chrome.runtime.MessageSender`, `IndexingRuleAttributes`, `ExtraInfoTemplateInput`, `Sharp`, `ThyDialogContainerComponent`, `MockRouteDefinition`, `IAnyStateTreeNode`, `ABLTableDefinition`, `RequestSession`, `MangoQuerySelector`, `SelectContext`, `ToLatexOptions`, `Screwdriver`, `TagResourceCommand`, `AnimationResult`, `DescribeStacksCommandInput`, `AuthAccessCallback`, `PluginDevice`, `ICourseDashboard`, `IImageryConfig`, `Adventure`, `MetricsService`, `ModuleOptions`, `IHelpCenter`, `GetUpgradeHistoryCommandInput`, `DevToolsExtension`, `RequestDto`, `HardhatConfig`, `GX.Command`, `SlateNode`, `DocfyService`, `AppInitialProps`, `CirclinePredicateSet`, `TransformListRow`, `GtRow`, `RawContract`, `SnapshotField`, `SignedAndChainedBlockType`, `GfxRenderDynamicUniformBuffer`, `ServeAndBuildChecker`, `RESTClient`, `RowGroup`, `ScopeSymbolInfo`, `MapAdapterUpdateEnv`, `SvelteSnapshotFragment`, `ZAR.ZAR`, `OperatorSpec`, `Road`, `WalkerStateParam`, `NodeMaterialConnectionPoint`, `FormErrorsService`, `ScaleCompression`, `NgModel`, `ValidationAcceptor`, `ContextTypes`, `BaseHeader`, `SyncEngine`, `FIRStorageReference`, `PresentationRpcResponse`, `CancelSignal`, `BinStructItem`, `LegacySocketMessage`, `ExpandedBema`, `S3MetricChange`, `TypeVarMapEntry`, `ISelectHandlerReturn`, `WebApiTeam`, `ISetItem`, `FrameItem`, `BrowsingPage`, `FileContext`, `AuthorizationErrorResponse`, `FormError`, `KeyValueStore`, `ColorStateList`, `TSTypeParameterDeclaration`, `IPair`, `TSBuffer`, `CollectionNode`, `MosaicDirection`, `GitLog`, `DaffCartShippingInformation`, `Rehearsal`, `requests.ListDynamicGroupsRequest`, `AzureParentTreeItem`, `ts.ArrowFunction`, `ServiceContainer`, `LinkType`, `UIComponent`, `PrimitiveSelection`, `Expiration`, `FirestoreSimple`, `ParenthesizedTypeNode`, `RateLimit`, `Icon`, `StackNode`, `MIRConstructableInternalEntityTypeDecl`, `ProductA`, `ServiceInstance`, `TestObservable`, `TemplateWithOptionsFactory`, `DvServiceFactory`, `MathExpression`, `OpenSearchDashboardsResponse`, `HistoriesService`, `NewsItem`, `OutputItem`, 
`IosBinding`, `ArrayPattern`, `ExportedConfig`, `SvelteIdentifier`, `CandidateResponderRule`, `SPHttpClientResponse`, `ImgType`, `Title`, `ConnectionListener`, `SeriesList`, `HookEnvironment`, `GameChannel`, `TriggerConfig`, `KoaMiddleware`, `core.ETHGetAccountPath`, `GrowableXYZArrayCache`, `Notify`, `AccountSetBase`, `IDataSourcePlugin`, `SortStateAPI`, `PlainObjectOf`, `GeneratorExecutor`, `AbstractField`, `UsePaginatedQuery`, `aws.autoscaling.Policy`, `VertexAttributeInput`, `ListDbSystemsRequest`, `SyncableElement`, `Chalk`, `AttributeListType`, `NonemptyReadonlyArray`, `KernelMessage.IIOPubMessage`, `ShellExecution`, `JSXOpeningElement`, `AbiStateObject`, `sdk.IntentRecognizer`, `IMiddlewareHandler`, `ISpriteAtlas`, `MongoIdDto`, `R2`, `CreatedObject`, `App.windows.window.IClassicMenu`, `IAudioSource`, `UpdateChannelError`, `HyperlinkMatch`, `IntegerList`, `BasketSettings`, `AnimationDefinition`, `ClearableMessageBuffer`, `NodeExtensionSpec`, `JsExport`, `ArrayPropertyValueRenderer`, `GaugeDialogType`, `UnderlyingSource`, `MergeResults`, `ASTCodeCompatibilityReport`, `InvalidSubnet`, `ConchVector3`, `PotentialEdge`, `ExternalCliOptions`, `KeywordTypeNode`, `ListIPSetsCommandInput`, `LineBasicMaterial`, `XAxisTheme`, `PluginConfigDescriptor`, `Dialect`, `IfExistsContext`, `PiConcept`, `PreferenceService`, `Difference`, `SO`, `props`, `TSSeq`, `IFunctionWizardContext`, `UserDevices`, `IDashboardConfig`, `JsonRpcResponsePayload`, `ResolveOutputOptions`, `CodeMirror.Doc`, `ExtendOptions`, `ComputedParameter`, `UseSavedQueriesProps`, `OffsetOptions`, `SelectSpace`, `Height`, `CoapPacket`, `DirectoryInfo`, `IExternalFormValues`, `IdentityContext`, `mitt.Handler`, `EXECUTING_RESULT`, `moneyMarket.market.BorrowerInfoResponse`, `AdagradOptimizer`, `IFilterItem`, `CommentAttrs`, `IChangesState`, `DashboardUrlGeneratorState`, `IUiState`, `NavigationService`, `IPropertyWithHooks`, `LinkOpts`, `ExtendedFeatureImportance`, `ReactExpressionRendererProps`, `CmsModelField`, `ExploreResult`, `TemplateFileInfo`, `Communicator`, `IStreamApiModel`, `MidwayFrameworkType`, `IDeployedContract`, `DfDvNode`, `BoostStyleProps`, `PaginationConfig`, `i18next.TFunction`, `PragmaNameContext`, `StopwatchResult`, `NodeImpl`, `FormatToken`, `ArkApiProvider`, `ParsingResult`, `CacheEntry`, `IWithHistory`, `monaco.languages.CompletionItem`, `INodeFilter`, `PageDescriptor`, `LookupByPath`, `SignInResult`, `RegionMetadataSchema`, `Refs`, `ModdleElement`, `ImportFromNode`, `QueryProviderAttributesRequest`, `SessionService`, `LIGHT_INFLUENCE`, `SnapshotDetails`, `Canceler`, `ChildNode`, `ArchiveHeader`, `ShadowsocksManagerServiceBuilder`, `MomentInput`, `IAPIService`, `WebsocketState`, `INotificationsService`, `SourceFileEntry`, `KeySuffixOptions`, `PropertySignature`, `vscode.Uri`, `ResponseCV`, `PlaywrightClientLike`, `ScaleByBreakpoints`, `BundleOrMessage`, `DotLayerArgs`, `ConnectorProperty`, `MetricData`, `TReducer`, `KanbanList`, `HttpHealthIndicator`, `ClientType`, `PropertyKnob`, `ts.ParsedCommandLine`, `CustomHelpers`, `ConnectionFetcher`, `NohmModel`, `RequestProgress`, `XEvent`, `CaBundle`, `ClientMessage`, `VaccinationEntry`, `DefaultAttributeDefinition`, `DamageEvent`, `TAccum`, `GitHubPRDSL`, `MyClassWithReturnArrow`, `LoginReq`, `StreamAction`, `SegmentedControlProps`, `XcodeCloud`, `EntryNested`, `SelectedScript`, `fixResult`, `KibanaSocket`, `IAugmentedJQuery`, `ClassLexicalEnvironment`, `StyleDeclaration`, `TileMap`, `ServiceTypeSummary`, `CompiledBot`, `DeletePolicyRequest`, `CoapRequestParams`, 
`RenderPassToDefinitionMap`, `JSParserOptions`, `ListTagsForStreamCommandInput`, `DehydratedState`, `ESTermSourceDescriptor`, `KWin.Client`, `IInputType`, `ConfigUpsertInput`, `IMessageResponse`, `SyncConfig`, `GearService`, `MIREphemeralListType`, `CppArgument`, `NameSpace.WithEnum`, `AsApiContract`, `XAnnotation`, `UnionRegion`, `GeometriesCounts`, `IJoin`, `DirectiveLocation`, `PaymentChannel`, `RoomSettings`, `GraphQLField`, `ITagNode`, `TheiaBrowserWindowOptions`, `ConfigDict`, `AudioProfile`, `CampaignTimelinesModel`, `graphql.GraphQLFieldConfigMap`, `Destroyable`, `TRPCErrorResponse`, `RootElement`, `Notice`, `anchor.web3.PublicKey`, `SessionData`, `PluginDeleteAction`, `CheckOptions`, `IStackStyles`, `ScriptCmd`, `Vehicle`, `ChangeCipherSpec`, `ITasks`, `InteractionReplyOptions`, `RecordStringAny`, `ActionCreatorWithoutPayload`, `DropdownComponent`, `DragInfo`, `CodeLocation`, `NewRegistrationDTO`, `DockerConfig`, `FileHandle`, `UpdateJobResponse`, `ImageIdentifier`, `NoteworthyApp`, `IndexedClassMapping`, `PartialVersionResolver`, `ArrayService`, `NSObject`, `InstallWithProgressResponse`, `MemberAccessInfo`, `V1ContainerStatus`, `ChatMessageReadReceipt`, `HassEntity`, `AssetPublishing`, `CodeItem`, `BaseFormValidation`, `BaseMarker`, `interfaces.MetadataReader`, `NSDatabase.ITable`, `requests.ListPrivateIpsRequest`, `ScaleHandle`, `NVM500NodeInfo`, `Products`, `Details`, `FieldRule`, `TString`, `SubProg`, `DecompiledTreeProvider`, `EntityAdapter`, `ScenarioCheckInput`, `RelationQueryBuilder`, `IconPack`, `ArweaveAddress`, `Popover`, `GitBlameLine`, `IconOptions`, `Http3Request`, `IVertex`, `MaskObject`, `web3.PublicKey`, `PathLike`, `BroadcastOperator`, `Glissando`, `OrderBalance`, `IInterceptors`, `LinkedPoint`, `SfdxError`, `TArg`, `CoreEventHandlers`, `Local`, `ISplitIndex`, `dia.Graph`, `StorageBackend`, `D3Link`, `ParametersHelper`, `OneInchExchangeMock`, `FlatCollection`, `WebcamIterator`, `NotificationState`, `NodeLocation`, `IMigrator`, `MetaDataOptions`, `StateMachine`, `Cpu`, `DataTypeFields`, `KeyOctave`, `p5`, `ActionLogger`, `LunarMonth`, `YogaNode`, `DeployState`, `NetInfoState`, `SVGTSpanElement`, `MVideoId`, `SliderInstance`, `CssClasses`, `GitHubInfo`, `MediaProvider`, `ProductOptionService`, `MockedObjectDeep`, `MessageComponentInteraction`, `RawRustLog`, `Slider`, `PickKeyContext`, `TechnologySectionProps`, `ListEntitiesCommandInput`, `vscode.NotebookData`, `UpdateArticleDto`, `ContainerArgs`, `BaseGraph`, `OffchainDataWrapper`, `IStatusResult`, `OutputTargetDistCustomElements`, `BlockNodeRecord`, `DisplayDataAmount`, `DateInterval`, `TranslateHttpLoader`, `XmlListsCommandInput`, `protos.common.IApplicationPolicy`, `RevalidatorOptions`, `ComponentSet`, `TextMap`, `MemoryDb`, `WholeJSONType`, `Awaitable`, `MapLocation`, `IAuthCredential`, `DeleteTagsCommandOutput`, `k8s.Provider`, `DiscoverTypings`, `FeedbackData`, `ReadOnlyFunctionResponse`, `ContextSet`, `Studio`, `ASRequest`, `CustomOracleNAVIssuanceSettings`, `GasTarget`, `SubmissionController`, `SwaggerSpec`, `CustomRequestOptions`, `iDraw`, `GaxCall`, `WorkspaceManager`, `NotificationError`, `IGraphData`, `ContractEntryDefinition`, `CallEffect`, `AppAuthentication`, `Auth`, `LabelModel`, `Walker.Store`, `Web3Provider`, `InMemoryStorage`, `PropertyData`, `ModelRef`, `Chai.Should`, `apid.VideoFile`, `ComponentChildren`, `EmployeeInfo`, `Organizations`, `CommandNode`, `AssignmentKind`, `Waypoint`, `CustomerModel`, `NamedField`, `IProp`, `PatternCache`, `Values`, `FormValidationResult`, `SimpleProgramState`, 
`BehaviorName`, `UndoPuginStore`, `d.WorkerMainController`, `KeypairBytes`, `InspectorViewProps`, `ZodIssue`, `SignatureReflection`, `HTMLBRElement`, `GLRenderPassContext`, `TeamsActions`, `IMoveFocusedSettings`, `DocViewsRegistry`, `Cube`, `Goal`, `White`, `HandPoseConfig`, `NotFoundException`, `IQueryParams`, `TableConfiguration`, `MethodOrPropertyDecoratorWithParams`, `Dsn`, `ArchTypes`, `ValidateOptions`, `SpringResult`, `AuditorFactory`, `CollectionBundleManifest`, `ICrudListQueryParams`, `WatchedFile`, `FluidObjectMap`, `StructType`, `ChangedDataRow`, `OidcClientService`, `DynamicEntries`, `PutBucketPolicyCommandInput`, `vsc.CancellationToken`, `JoyCon`, `TimeoutErrorMode`, `UserPrivilegeService`, `AdbBufferedStream`, `DataHandle`, `TestWorker`, `WebOutput`, `PageActions`, `ObservableArray`, `DataTable.ColumnCollection`, `Descendant`, `Node.DepositParams`, `BindingOptions`, `TransformedStringTypeKind`, `ScanMessage`, `NestedCSSProperties`, `EvaluateFn`, `HttpResponseBadRequest`, `CompositeParserException`, `SceneEmitterHolder`, `TD.ThingProperty`, `ApiPipelineVersion`, `IndividualTreeViewState`, `MarketHistory`, `InteractiveStateChange`, `FunctionTypeParam`, `IMaterialUniformOptions`, `IndicatorCCReport`, `TmGrammar`, `NzTreeNodeOptions`, `OAuthService`, `DateFormattingContext`, `DomManipulation`, `AppCompatActivity`, `DeleteMembersCommandInput`, `IParty`, `WrappingMode`, `GraphicsLayer`, `BoundBox`, `GLTFPrimitive`, `Entity.List`, `GameInfo`, `IMyTimeAwayItem`, `ShowProps`, `TriggerAction`, `FixHandlerResultByPlugin`, `SyntaxInterpreter`, `RemoveOutputRequest`, `CGOptions`, `VariableGroupDataVariable`, `V2`, `NotebookNamespace`, `effectOptionsI`, `EditPoint`, `SignalValues`, `ZWaveLogContainer`, `TooltipPortalSettings`, `ClientConfig`, `InlineFragmentNode`, `APIResponse`, `FullChat`, `PutAssetPropertyValueEntry`, `DataDocument`, `HslaColor`, `ThyTransferSelectEvent`, `PipelineStage`, `IEdge`, `GlobalScript`, `MousecaseResult`, `ContextType`, `TNerve`, `TransportParameterId`, `DeleteDocumentCommandInput`, `v`, `EnumIO`, `TypeEnv`, `ContainerBindingEvent`, `RouteHealthCheckResult`, `m.Comp`, `SignatureTypes`, `SignatureAlgorithm`, `T2`, `Node2D`, `CustomConfig`, `PrimaryContext`, `SourceFile`, `ExportingOptions`, `SpectrogramData`, `LegendValue`, `AwaitEventEmitter`, `TimeSlot`, `PoolConfig`, `HapiResponseObject`, `HomeProps`, `AsyncOpts`, `ResolvedStyle`, `DataDrivenQuery`, `SpyLocation`, `CompilerSystemRemoveDirectoryOptions`, `ObservableLike`, `UserNotification`, `cc.Event.EventMouse`, `EmployeeService`, `ViewItem`, `JVertex`, `FieldType`, `Address6`, `NonCancelableCustomEvent`, `FlowTypeTruthValue`, `StartMigrationCommandInput`, `Minion`, `RuledSweep`, `ListPackagesForDomainCommandInput`, `RedisCommandArgument`, `OperatorDescriptorMap`, `IVersion`, `AlertStatus`, `TypeGuard`, `OptimisticLockError`, `io.ModelArtifacts`, `HTMLImageElement`, `DeleteBranchCommandInput`, `TaroEvent`, `FieldFormatConfig`, `UpdateAccountCommandInput`, `ValidatorError`, `requests.ListManagedInstancesRequest`, `CreateApplicationCommandInput`, `ListingDefinitionType`, `LibraryComponent`, `AuthenticationFlowRepresentation`, `RowRendererProps`, `SerializedEvent`, `ast.PersistNode`, `OrganizationProject`, `DeferredImpl`, `StringNode`, `ModelItem`, `SchemaHelper`, `ModelCheckResult`, `AerialMappers`, `MultiFn2`, `IGroupData`, `EventFnError`, `SchemaGenerator`, `Executable`, `vscode.Extension`, `ObjectConstructor`, `FieldTypeSelectOption`, `ClassMethodDefinition`, `ScalarNode`, `WmsLayer`, 
`UnidirectionalTransferAppState`, `Success`, `DecodedLogEntryEvent`, `CheckItem`, `B14`, `TeamSpaceMembershipProps`, `PartyLeave`, `ContextPosition`, `QuotaSetting`, `Series.PlotBoxObject`, `ChangeAnnotationIdentifier`, `StartupInfo`, `CachedToken`, `PreviewPicture`, `StructCtor`, `IRunResult`, `CdsTreeItem`, `DescribeFileSystemsCommandInput`, `SymbolId`, `TokenDetailsService`, `Sein.IResourceState`, `CreateUserResponse`, `domain.Domain`, `CreateVpcPeeringConnectionCommandInput`, `BaseElement`, `ForgotPasswordVerifyAccountsValidationResult`, `PluginDiscoveryError`, `BlobBeginCopyFromURLResponse`, `BackgroundRepeatType`, `DiscordInteraction`, `BroadcastMode`, `SyntaxCursor`, `IIFeedsState`, `CSharpType`, `Op2`, `SessionClient`, `ProcessStorageService`, `VertoMethod`, `SimpleStateScope`, `PushOptions`, `CSSValues`, `ModelConstructor`, `MockedElement`, `RenderNodeAction`, `ElementArray`, `TooltipStateReturn`, `ITranslator`, `UNKNOWN_TYPE`, `HTTPHeader`, `CallHierarchyOutgoingCall`, `InstanceManager`, `ServiceWorkerVersion`, `FileUploadService.Context`, `GetCollapsedRowsFn`, `Float32List`, `ITodoItem`, `QRCodeScheme`, `AutoRenderOptionsPrivate`, `MachineEvent`, `IInstallManagerOptions`, `ViewPortHandler`, `MutationHandler`, `ecs.ContainerDefinitionOptions`, `WorkerMessageType`, `Clue`, `TypeAliasDeclaration`, `SGGroupItem`, `ICriteriaNode`, `TestFactory`, `PagerAdapter`, `AddressBookContact`, `AxisLabelFormatterContextObject`, `ServiceProxy`, `JSONRPCRequest`, `CliCommandProvider`, `EmbedToken`, `LedgerReadReplyResponse`, `DescriptorIndexNode`, `ConfigDeprecationProvider`, `KeyValueDiffers`, `ObjectData`, `AssertClause`, `FormatStringNode`, `ClientMatch`, `Workflow`, `AccountFacebookInstantGame`, `DiffInfo`, `BSPEntity`, `BitmapText`, `ISource`, `ProfilerConfig`, `DocumentProcessorServiceClient`, `RootLabel`, `OdmsPhaseActions`, `DynamicFormControlLayout`, `AnimationStateMetadata`, `AstNodeDescription`, `JoinClause`, `formatting.FormatContext`, `ChartAnimator`, `WalletKey`, `WordcloudUtils.PolygonObject`, `UIBeanStorage`, `Flavor`, `Twitter.Status`, `IPaneContent`, `DurableOrchestrationStatus`, `SavedObjectTypeRegistry`, `Fixed`, `AtomicAssetsHandler`, `LayerWeightsDict`, `JSONRPCProvider`, `blockchain_txn`, `TwoWayRecordObservable`, `WorkspacePath`, `RequestTask`, `AfterCaseCallback`, `DefaultGeneratorOptions`, `LOGGER_LEVEL`, `Asteroid`, `ErrorEvent`, `MongooseQueryParser`, `TableSuggestionColumn`, `BodyState`, `GameScene`, `FormatFactory`, `IContentFilter`, `DatabaseState`, `STColumnFilterMenu`, `QuantifierResult`, `IUserItemOptions`, `TransactionWithBlock`, `StateUpdatedEvent`, `StatusParams`, `TIO`, `ConsoleWidget`, `TwitchChat`, `MethodParam`, `OrganizationVendorService`, `CalculateInput`, `IOSInput`, `ExecutorMessages`, `CausalRepoBranch`, `SelectedPaths`, `ARPosition`, `CanvasSpaceValues`, `RestoreFn`, `sdk.ConversationTranscriber`, `GraphQLAbstractType`, `CellInterval`, `VcsAuthenticationInfo`, `SubmissionObjectState`, `ByteStream`, `MoveLandedType`, `CirclineArc`, `MessageRemoteImage`, `Queries`, `DaLayoutConfig`, `HttpStatus`, `Checkbox`, `IEmployeePresetInput`, `MarkdownParsedData`, `S2ExtensionType`, `Pred`, `ListViewProps`, `MultiDictionary`, `CreateStateContainerOptions`, `DateParts`, `LogObj`, `VisualizationOptionStore`, `PkSerializer`, `monaco.languages.ProviderResult`, `BaseCallbackConstructor`, `HDOMNode`, `SflTester`, `NewLineType`, `TreeModelChanges`, `EntitySchemaDatatype`, `JurisdictionDomainModel`, `IController`, `SavedObjectsExportTransformContext`, `RoughRenderer`, 
`CategoryState`, `ParameterDecorator`, `TemplateDiff`, `Html5QrcodeScannerState`, `Postfixes`, `MsgBlock`, `GX.CompareType`, `Interpreter`, `IArticleData`, `DescribeInputCommandInput`, `TabulatorThingChanges`, `StyleResults`, `BlockchainEnvironmentExplorerProvider`, `ShapeProps`, `ThemeMode`, `SpringFn`, `DelegateBuilder`, `RunShellResult`, `TestsManifest`, `WhileNode`, `DominantSpeakersInfo`, `SavedQuery`, `WhereClause`, `QueryAccountsRequest`, `SentMessageInfo`, `Panels`, `UrbitVisorConsumerTab`, `IConditionalTag`, `ControllerSpec`, `ApiClientConfiguration`, `JavaRenderer`, `OpenFile`, `ethereum.TransactionReceipt`, `E.ErrorMessage`, `RaribleProfileResponse`, `PolicyBuilderConfig`, `ChartPointsSource`, `PostEntity`, `PElementHandle`, `UrlGeneratorsSetup`, `CreateUserCommand`, `ParsingContext`, `ITabInfo`, `Buf`, `Bingo`, `DecadeCell`, `RenderBannerConfig`, `NamespaceDeclaration`, `GetContactCommandInput`, `StandardProjectCard`, `DepthwiseConv2D`, `OverlayBackgroundProps`, `DataTableFormatProps`, `DocumentClient`, `SecurityPolicy`, `ast.AssignNode`, `AstDeclaration`, `TagConfig`, `HumidityControlSetpointCCGet`, `TsSafeElementFinder`, `IAM`, `TriggeredEvent`, `ListTasksCommandInput`, `EventMap`, `pxt.PackagesConfig`, `PaySlip`, `ParsedIOMessage`, `Sexp`, `esbuild.Plugin`, `LVarKeySet`, `CElement`, `FetchStartedAction`, `EnumMetadata`, `Kind2`, `AuthenticationInterface`, `SvelteSnapshot`, `IText`, `ImmutableObjectiveTag`, `IFixture`, `JMapLinkInfo`, `Exercise.Question`, `IGroupFilterDefinition`, `Verify`, `SocketGraphicsItem`, `CustomError`, `DescribeWorkspacesCommandInput`, `GLint`, `MockLoadable`, `PDFObject`, `DebugProtocol.StackTraceResponse`, `StorageFormat`, `ZodObject`, `SignedVerifiableClaim`, `IHomebridgeAccessory`, `ChangeCallback`, `ListAttendeesCommandInput`, `CommentSeed`, `NameValueDto`, `BranchPruner`, `DataProcessor`, `CreateStreamCommandInput`, `Limiter`, `PossiblyAsyncOrderedIterable`, `PluginBuild`, `DocTable`, `glTF.glTF`, `OpenSearchQuerySortValue`, `ZipFileEntry`, `IndexResults`, `IKubernetesManifestCommandData`, `LifeCycle`, `Pbkdf2Digests`, `sdk.IntentRecognitionResult`, `TableReference`, `xLuceneTypeConfig`, `StyledForwardStyle`, `IScrollerInfo`, `ProjectProperties`, `ReplicaDetails`, `ProtectionRuleExclusion`, `$DFS.DFS_Config`, `SliderValue`, `LoggerProperties`, `SwankRequest`, `OutUserInfoPacket`, `BundleRef`, `Mark`, `DrawerState`, `ExtractorConfig`, `Redirect`, `vscode.WorkspaceFolder`, `INotifyItem`, `AnimationService`, `OrdererTreeItem`, `StackBuilder`, `DateTimeRecognizer`, `InputOptions`, `InputModalityDetectorOptions`, `AppState`, `Health`, `IntrospectionWarnings`, `TextureSlab`, `FirestoreConnectorModel`, `InstanceType`, `IActiveLearningSettings`, `IErrorsManager`, `SnackbarErrorAction`, `ClipPreRenderContext`, `PIXI.Renderer`, `ISaxParser`, `ApiLocatorService`, `TensorLike`, `ProviderType`, `FunctionJSON`, `Detection`, `AvatarConfig`, `GithubIssue`, `RawRow`, `ImageryLayer`, `FileRenameEvent`, `MemberSoundEffects`, `ShaderVariableType`, `ResetPasswordDto`, `GlobalEventsService`, `RelativeDateFilterTimeUnit`, `ng.IPromise`, `ExceptionlessClient`, `EngineMiddlewareParams`, `requests.ListLogGroupsRequest`, `PresentationManagerProps`, `StateAccessingOptions`, `GetCellColSpanFn`, `DataCallback`, `WalletCredentials`, `Dubbo`, `DaffCompositeProductItem`, `nVector`, `CursorMap`, `WorkerDOMConfiguration`, `TickFormatter`, `TPermission`, `ReturnT`, `RepositoryType`, `NdjsonToMessageStream`, `TimelineFilter`, `TableDataSet`, `MatchmakerMatched`, `BinaryOperator`, 
`ChallengeData`, `JSONMappingParameters`, `ThyTreeNode`, `ActivityAction`, `Registry`, `UserConfiguration`, `ChildProps`, `ConfigurationEnv`, `DigitalWire`, `IPFS`, `StackActivity`, `BuildConfigs`, `OpenApiDecorator`, `TransferHotspotV1`, `IIssueParms`, `RoutesWithContent`, `AxeResult`, `JsonSchema`, `BillingActions`, `DagOperator`, `RelationType`, `NucleusFile`, `DateCell`, `FormErrorMessageModuleConfig`, `BoolPriContext`, `MockStakingContract`, `ReturnValue`, `Binder`, `InStream`, `NzTabComponent`, `IPlug`, `LogisticsRequest`, `JRPCMiddleware`, `PerSideDistance`, `JSet`, `SelectableItem`, `MerchantGoodsService`, `GetModelTemplateCommandInput`, `DataflowAnalyzer`, `GetAllAccountsValidationResult`, `VdmParameter`, `ExtraGate`, `FaIconLibrary`, `ResourceConstant`, `ListOfPoints`, `ParamsFilter`, `QueryableFieldDescription`, `AlfrescoApiService`, `childProcess.ChildProcess`, `GetConnectionResponse`, `RowTransformCallback`, `SettingsNotify`, `OptsChartData`, `ExponentSpec`, `CinemaFrameType`, `DeleteMeetingCommandInput`, `ConnectionProfile`, `IZoweNodeType`, `FieldGroup`, `Vfs`, `OverrideContext`, `LineHeight`, `ITimeSlot`, `MotionInstanceBindings`, `BaseUIElement`, `BITBOX`, `PartialTransaction`, `Highcharts.AnnotationControllable`, `TextChange`, `C5`, `Requireable`, `ParseCxt`, `UserWhereInput`, `TermEnv`, `IntlType`, `ChangeAnnotation`, `ProviderOptions`, `SGArcItem`, `Verdaccio`, `ClientFactory`, `BaseEnvironment`, `AutoAcceptProof`, `ModalState`, `flags.Discriminated`, `LoaderConfOptions`, `IVue`, `EffectComposerComponent`, `coreClient.CompositeMapper`, `ProcessStatus`, `LibrarySearchQuery`, `InjectionService`, `CancellationToken`, `UIElement`, `NextResponse`, `ts.EntityName`, `HostRecord`, `DelayedRemovable`, `TextEditorEdit`, `TForwardOptions`, `PostTexMtx`, `ImportEqualsDeclaration`, `Versions`, `Brew`, `Algebra.PlanNode`, `GrafanaTheme`, `EndpointAuthorization`, `Support`, `ComponentCompilerMeta`, `CBService`, `TCacheResult`, `SVFunc`, `ICreateFormDialogState`, `ICollaborator`, `OpenSearchClientConfig`, `PhysicalModel`, `MDCTabIndicatorAdapter`, `HTTPAuthorizationHeader`, `azureBlobStorage.Container`, `ErrorBoundaryProps`, `HsdsId`, `TokenInterface`, `Json.Segment`, `ReadOnlyIterator`, `JSONAPIDocument`, `CrossTable`, `DirectivePosition`, `IpcCommandType`, `JsonPointer`, `Zoo`, `Point.PointLabelObject`, `ValueContainerProps`, `BTIData`, `IClient`, `SecretsService`, `AuthMode`, `GitFile`, `Neo4jService`, `IAzureQuickPickItem`, `RequestOption`, `ValueType`, `LineIndex`, `ecs.TaskDefinition`, `SfdxOrgInfoMap`, `Workbook`, `GfxVendorInfo`, `ObjectLayer`, `ProblemFileEntity`, `InternalTimeScalePoint`, `NFT721V2`, `WebviewPanelOnDidChangeViewStateEvent`, `gradient`, `ContextT`, `FakeHashProvider`, `Highcharts.VMLElement`, `IAggConfigs`, `IViewZoneChangeAccessor`, `IntrospectionNamedTypeRef`, `Rollup`, `Poll`, `VariableService`, `GBMinInstance`, `IndTexStage`, `PriceSpecInput`, `CdtTriangle`, `FlattenedProperty`, `OnSetOptionsProps`, `MapPolygon`, `PlayService`, `LogAnalyticsSourceLabelCondition`, `Donation`, `RhoContext`, `TransactionBuilderFactory`, `SendToAddressOptions`, `PGOrbitsDef`, `TupleTypeReference`, `LoaderContext`, `CalendarItem`, `CompoundSchema`, `DeleteComponentCommandInput`, `FieldStatsCommonRequestParams`, `TertiaryButtonProps`, `EntityDispatcherFactory`, `AccountId`, `XmlEmptyListsCommandInput`, `MountPoint`, `IInterval`, `DisjointRangeSet`, `ContractCallContext`, `NativeViewElementNode`, `UploadService`, `ThemeColors`, `PropTypesOf`, `PublicationViewConverter`, 
`PaletteMode`, `THREE.Ray`, `freedom.RTCPeerConnection.RTCPeerConnection`, `VariantOptionQualifier`, `StandardPrincipal`, `ISummaryTree`, `PropCombination`, `IAtomMdhd`, `MsgWithdrawLease`, `AnimGroupData_Draw`, `TextDocuments`, `MutableArrayLike`, `GBDialogStep`, `NormalRequest`, `AsyncMethodReturns`, `Undefinable`, `DataViewsContract`, `ClientState`, `DeleteStatus`, `TimelineProvider`, `AbstractNode`, `ReactFlowState`, `MyController`, `ActionMessage`, `Interfaces.ViewEventArguments`, `DebugSourceBreakpoint`, `FormattedString`, `IConnectionCredentialsQuickPickItem`, `AwsCallback`, `ZodEffects`, `Int8Array`, `DejaPopupConfig`, `OpenApi.Schema`, `AndroidProjectParams`, `SavedObjectsType`, `ColorStop`, `Supports`, `MultiTablePrettyfier`, `ObjectDoc`, `DGroup`, `PluginInstaller`, `ChangeFilter`, `ReactApolloRawPluginConfig`, `UserStoreAction`, `StrokeStyle`, `StorageObjectList`, `Jobs`, `MutableCategorizedArrayProperty`, `CurrentForm`, `PromiseEventResp`, `WebGLBuffer`, `WechatyVorpalConfig`, `DescribeScalingPoliciesCommandInput`, `AssociationAddress`, `FutureBoolean`, `GunRolls`, `ChannelMessage`, `SlotValue`, `Picker`, `MsgCloseDeployment`, `PlannerConfiguration`, `StaticdeployClient`, `ServiceDescription`, `SuggestionOperationType`, `WindowId`, `puppeteer.KeyInput`, `ProseMark`, `MapStateToPropsParam`, `Quantity.MANY`, `DeleteSlotTypeCommandInput`, `MockChannel`, `EventSourceMap`, `ConvertedToObjectType`, `BucketMetadata`, `UpdateStageCommandInput`, `ErrorNode`, `TrackedDocument`, `ConfigProvider`, `AsyncStream`, `Lens`, `NetworkgraphLayout`, `WorkerChild`, `MsgFromWorker`, `IFileAccessConfiguration`, `Claims`, `Changer`, `ServiceArgs`, `CVLanguageManager`, `RenderConfig`, `ConnectionManager`, `AuthenticationSessionsChangeEvent`, `AliasEventType`, `IsSpecificCellFn`, `ReducerList`, `RecipientMap`, `AV1Obu`, `ListPhoneNumbersCommandInput`, `GraphQLUnionType`, `OfflineAudioContext`, `ShaderDefine`, `TransientSymbol`, `ICliCommand`, `Marshaller`, `OverlayChildren`, `PrimitiveTypeAssertion`, `AuthUserContext`, `AttachmentID`, `MaybePromise`, `EditableSelection`, `Reference`, `TreeDecoration.Data`, `ValidatorProxy`, `UploadMetadata`, `CharCategory`, `GeometricElementProps`, `ServicesAccessor`, `UnionableType`, `ChangLogResult`, `BannerProps`, `HTTPResponse`, `MdcListItem`, `T11`, `CombineParams`, `RedirectionResponse`, `CommandRegistryImpl`, `PrismaClientErrorInfo`, `IRegisterItem`, `MatchCreate`, `NodeModel`, `ReindexSavedObject`, `PromoteReadReplicaDBClusterCommandInput`, `IDatabaseDataModel`, `BatchWriteCommandInput`, `PromiseLike`, `apid.LiveStreamOption`, `PGOrbit`, `FormEventHandler`, `SideNavItem`, `SqlVals`, `ILicenseState`, `InterceptedRequest`, `ObjectDetectorOptions`, `ChartCoordinate`, `GetNotificationsFeedCommand`, `ChainFunction`, `CreateRoomCommandInput`, `SubscriptionItem`, `AnInterface`, `binding_grammarListener`, `BigAmount`, `GetPublicKeyCommandInput`, `TargetData`, `Coordinate`, `ProjectDefinition`, `HalResourceConstructor`, `ImageClassifierOptions`, `SheetContextType`, `CustomCallbackArgs`, `ListObjectsResponse`, `tfc.serialization.ConfigDict`, `IPageRenderInstruction`, `ChildArenaNode`, `TopNavMenuProps`, `ComparisonOperand`, `ShelfFunction`, `Changelog`, `SaberProvider`, `PluginsServiceStartDeps`, `DebugProtocol.VariablesResponse`, `EmailHandler`, `DomainEndpointOptions`, `SolidityValueType`, `SelectReturnTypeOptions`, `DraggableProvided`, `QuadViewModel`, `ChampionsLeagueStat`, `MotionDataWithTimestamp`, `BlobId`, `Alg`, `NullableMappedPosition`, `ClassReflection`, 
`DependencyTree`, `AuthUser`, `IVideoFileDB`, `BucketInfo`, `CachingRule`, `ComputeImage`, `MappedDataSource`, `BeDuration`, `TaxonomicFilterGroupType`, `Examples`, `VerificationInput`, `DataPlanSObject`, `ValidationFlags`, `CurveCrossOutput`, `PredefinedGeneratorResolvers`, `NoteStorage`, `FakeSystem`, `Coin`, `ExternalEmitHelpers`, `CommitSelectionService`, `ModularPackageJson`, `byte`, `PageHeader`, `RemoteBreakpoint`, `MixedObject`, `Raycaster`, `SetupServerApi`, `ObjectDescription`, `ApiItemMetadata`, `Executor`, `ChildReference`, `GfxProgramDescriptorSimple`, `WaitContextImpl`, `AnyCallbackType`, `SendableMsg`, `HSD_LoadContext`, `RangeIterable`, `MicrosoftStorageStorageAccountsResources`, `EncoderOptionsBuilder`, `Point2D`, `JpegEmbedder`, `ContractMethod`, `CloudFrontHeaders`, `FetchDependencyGraph`, `u128`, `EnabledFeatureItem`, `INodeList`, `TileKeyEntry`, `ListParameters`, `RtcpSenderInfo`, `ReleaseAsset`, `Nullable`, `FormGroupField`, `LogError`, `AssignableObject`, `EggPlugin`, `TypescriptParser`, `WriterResource`, `CheckerOption`, `ApexVariable`, `MapExtent`, `ContinueNode`, `MergeConfig`, `APIConstructor`, `ShapeConstructor`, `SupervisionCCReport`, `ParticipantResult`, `NodeSpecOverride`, `LoanFactory2`, `Session.IOptions`, `Models.BlobMetadata`, `CanonicalOrder`, `IAggregationDataRow`, `Tracklist`, `XYPosition`, `FilterValues`, `ModelFitDatasetArgs`, `SqlParameter`, `ICellModel`, `ContentsXmlService`, `InvoiceService`, `core.Keyring`, `TProperty`, `GitHubPullRequest`, `flags.Kind`, `IProxySettings`, `RoadmapProps`, `SimpleRule`, `UnorderedQueryFlow`, `Stanza`, `HelpRequestArticle`, `AnnotationAnalyticsAggregation`, `DerivedAtomReader`, `ProgressInfo`, `TabPanelProps`, `ListChannelsCommandInput`, `ConnectionData`, `RenderRow`, `BlockOptions`, `SavedObjectFinderProps`, `UpdateApp`, `FileItem`, `AuthenticationModel`, `RegisterValue`, `MonitoringResources`, `Phone`, `DrawingGraph`, `IScoreCounter`, `Person`, `ApolloVoyagerContextProvider`, `ListExperimentsCommandInput`, `IPole`, `ProofFile`, `CreateTransactionOptions`, `SharedTree`, `RectangleShape2DSW`, `BundleRefs`, `RewardVaultItem`, `TestHostComponent`, `Inheritance`, `IDBVersionChangeEvent`, `options`, `Types.Authentication`, `TestExecutionInfo`, `TestInterval`, `Clipboard`, `GoldTokenInstance`, `EvalEnv`, `EsLintRule`, `SourceMetadata`, `requests.ListBlockVolumeReplicasRequest`, `TsActionCreator`, `UI5ClassesInXMLTagNameCompletion`, `Crisis`, `AsyncExecutor`, `ChromeNavLink`, `CardTitleProps`, `Atom.TextEditor`, `IZosmfIssueParms`, `NexusExtendTypeDef`, `ArianeeHttpClient`, `AxiosPromise`, `com.google.firebase.database.DataSnapshot`, `ABuffer`, `GraphQLGenie`, `Paths`, `BillingGroup`, `DragDataStore`, `MatrixProfileInfo`, `DeleteRuleCommandInput`, `Func1`, `ExpoWebGLRenderingContext`, `ApplyBuffEvent`, `SymbolLinks`, `Animator`, `IRunConfiguration`, `TagSet`, `TraderConfig`, `OperationStream`, `RoomFacade`, `CreateWalletFlow`, `InputListConfig`, `TradeComputed`, `ParsedImport`, `XrmUiTest`, `ExportService`, `MediaStream`, `WorkRootKind`, `RSAPrivateKey`, `NewId`, `Locals`, `SourceStatus`, `WrappedValue`, `MarketTicker`, `StandardPrincipalCV`, `IndexConfig`, `ResponderEvent`, `CustomFile`, `IPFSFile`, `InternalNamePath`, `IHeaderItem`, `UserFilter`, `IRowIndices`, `ast.LiteralNode`, `HTTPProvider`, `STExportOptions`, `ItemTypeNames`, `IValidateProjectOptions`, `WriteFileCallback`, `ChangeSetData`, `AlertsProvider`, `StructService`, `City`, `LangiumConfig`, `TimeDistributed`, `MML`, `XElementData`, `ModeledMarker`, 
`ILoadAll`, `TemplateParser`, `TComponentControls`, `DescribeRepositoryAssociationCommandInput`, `WantedTopics`, `ArticlesService`, `TSTypeAliasDeclaration`, `IndexStore`, `GlobalNode`, `DeleteResourcePolicyResponse`, `OutgoingRegistry`, `IZosFilesResponse`, `MatSnackBarContainer`, `EventHandlerType`, `BaseClosure`, `SecretUtils`, `ContainerImage`, `SolidDashedDottedWavy`, `MutableVector2d`, `SVGRenderer.ClipRectElement`, `BugState`, `TextAreaTextApi`, `MDCListFoundation`, `PropertyOperationSetting`, `UnpackAttrs`, `CognitoIdentityServiceProvider`, `ReferenceList`, `IBlobSuperNode`, `MyTargetProps`, `ReferenceResult`, `SourceSymbol`, `CountingData`, `ENDElement`, `DiscordToken`, `IMetricAggConfig`, `StateA`, `TSTypeParameter`, `ServiceBuild`, `InputSearchExpression`, `DbEmoji`, `ErrorBag`, `MatchingOptions`, `STSortMap`, `DeltaInsertOp`, `QueryServiceStartDependencies`, `MenuDataItem`, `RunTaskOption`, `ResponseObject`, `SuggestionFactory`, `WindowState`, `MainController`, `Interpolations`, `IKeycodeCategoryInfo`, `Equiv`, `ListBuildsCommandInput`, `THREE.ShaderMaterial`, `DS`, `LinkSteamRequest`, `NodePort`, `LoadOnDemandEvent`, `UpdateReplicationConfigurationCommandInput`, `PluginDomEvent`, `RunSuperFunction`, `PQLS.Library.ILibrary`, `Gauge`, `IAtomStsd`, `MultisigBuilder`, `MetricInterface`, `AllSelection`, `FormValidation`, `LoadRange`, `MutableCategorizedPrimitiveProperty`, `IRankingHeaderContext`, `IRenderingContext`, `ResolvedProjectReference`, `ComponentCompilerStaticProperty`, `GetServerSidePropsContext`, `ParquetCodecOptions`, `Entity.Notification`, `CallbackOptionallyAsync`, `ApiParams`, `TConfiguration`, `FrequentLicences`, `requests.ListHttpProbeResultsRequest`, `IEventEmitter`, `OptimizeJsOutput`, `AcNotification`, `ts.PropertyName`, `INodeParameters`, `BazelOptions`, `u16`, `NzCalendarHeaderComponent`, `PSTNodeInputStream`, `GitBranch`, `CSSBlocksConfiguration`, `TKey`, `ParserState`, `TransactionBeganPayload`, `Transform3D`, `Artist`, `SanitizerFn`, `PddlSyntaxNode`, `IViewPathData`, `RouteItem`, `LabwareDefinition2`, `DataSet`, `WasmResult`, `HTMLVmMenuRadioElement`, `TestFunctionImportMultipleParamsParameters`, `InstallMessage`, `messages.Hook`, `JSONLocationFunction`, `ContractMethodDescriptor`, `ReactTestRendererJSON`, `HelloWorldContainer`, `TypeElementBase`, `CommunicatorEntity`, `ApiResponse`, `StacksNode`, `V1CertificateSigningRequest`, `IObjectInspectorProps`, `CoreTracerBase`, `BalanceMap`, `SupportedPackageManagers`, `InflightKeyGenerator`, `TypeShape`, `TEAttribute`, `ListDiscoveredResourcesCommandInput`, `HsQueryVectorService`, `Scenario_t`, `JsonSchema7Type`, `IJsonStep`, `__String`, `HeroSelectors$`, `NormalizedPackageJson`, `IElementInfo`, `TaskEither.TaskEither`, `HTMLScStatusTimelineOverlayRowElement`, `StreamGraphNode`, `Dense`, `FiscalCode`, `UserInstance`, `IntrospectionObjectType`, `UpSetAddon`, `Imports`, `SeriesType`, `ContextMenu`, `q.TreeNode`, `squel.Select`, `GlobalizeConfig`, `GUIOrigin`, `IVideoApiModel`, `ProofDescriptor`, `ISharedFunctionCollection`, `Claim`, `ITranscriber`, `FunctionComponentElement`, `OpHandler`, `EventActionHandlerMeta`, `FakeComponent`, `IEvmRpc`, `MsgCreateCertificate`, `IStreamPropertiesObject`, `WikidataResponse`, `TSLet`, `BodyPixInput`, `PageViewComponent`, `DaffCartFactory`, `NedbDatastore`, `PeerConnection`, `V1Role`, `Summary`, `ISceneView`, `ImageFormatTypes.JPG`, `ScmResource`, `SweetAlertResult`, `JEdge`, `OSCMessage`, `EncodedTransaction`, `ResourceProvider`, `TError`, `Jwt`, `NodeClass`, `PageEvent`, 
`Context`, `TypingVersion`, `BuilderContext`, `BorderConfig`, `Realm.ObjectSchema`, `SerializeSuccess`, `HashMapStructure`, `CanvasModel`, `SmartContractPayload`, `InternalOptions`, `NewFOS`, `SlpRealTime`, `PointCompositionOptions`, `SeriesDoc`, `EggAppConfig`, `TestSerializer`, `SystemVerilogParser.SystemVerilogContainerInfo`, `IResolverObject`, `SavedObjectsSerializer`, `WebhookEvent`, `ExecuteCommandState`, `DeprovisionByoipCidrCommandInput`, `ApiErrorService`, `AddressString`, `IStackItemStyles`, `CanvasBorderRadius`, `BehaviorHost`, `AgentMessage`, `Equivalence`, `TestSpec`, `Solver`, `Users`, `NotificationChannel`, `F2`, `TOCHeader`, `ApolloClientOptions`, `TemplateLiteralTypeSpan`, `GasOptionConfig`, `RelativeDateRange`, `TsConfigLoaderResult`, `SModelElementSchema`, `SV`, `OgmaService`, `NumberNode`, `RefType`, `ServiceEnvironmentEndPointOverview`, `util.TestRunError`, `ComponentResolver`, `ICredentialsResponse`, `EditorChange`, `AugmentedActionContext`, `StubbedInstance`, `ValueHolder`, `ResolvedConfig`, `TestEntity`, `SystemVerilogSymbol`, `ApiSchema`, `SpectatorHostFactory`, `ServerApi`, `AnnotationTypeOptions`, `HostWatchFile`, `ConnectionProperty`, `SyncToolSettingsPropertiesEventArgs`, `PointCloudOctreeGeometryNode`, `FabricEnvironmentRegistryEntry`, `LoadedVertexData`, `Vector3Like`, `ConnectionDictionary`, `ArrayContext`, `ClientChannel`, `Combatant`, `JsonArray`, `TestTreeHierarchyNode`, `WriteRequest`, `Basic`, `K.LiteralKind`, `OrchestrationClientInputData`, `VAceEditorInstance`, `ArrayBufferView`, `KeyRegistrationBuilder`, `BreakpointState`, `TDestination`, `UIButton`, `RouteManifest`, `EffectVblDecl`, `WrappedWebGLProgram`, `SimpleComparator`, `ColorDef`, `CustomDate`, `ResourceMetadata`, `Cypress.cy`, `CommandlineOption`, `SCanvas`, `IDomainEvent`, `DocsLibrary`, `Tsconfig`, `FieldVisConfig`, `IdeaDocument`, `UserDevice`, `ts.ParseConfigHost`, `CipherBulkDeleteRequest`, `Foobar`, `VAIndent`, `IHsv`, `TransportParameters`, `LocalButtonProps`, `ContainerSiteConfig`, `IFormData`, `RoundingModesType`, `GfxChannelBlendState`, `ServerSettings`, `DeleteFilterCommandInput`, `CompilationData`, `GenericNumberType`, `AlphaDropout`, `Company`, `ResponseBuilder`, `Quadratic`, `MetaesException`, `DefaultIdentity`, `WarningPrettyPrinter`, `DbCall`, `InputState`, `WalletInterface`, `StateDiff`, `TweenMax`, `OperationCallback`, `MessageButton`, `requests.ListIamWorkRequestsRequest`, `ErrorType`, `WorkflowStepInput`, `MarkerScene`, `AlignConstraint`, `MerchantMenuOrderGoodsEntity`, `PostMessageService`, `SelectableState`, `SparseSetProps`, `CreateContextReturn`, `ImportCertificateCommandInput`, `IGenericTarget`, `JavaScriptEmbedder`, `AwaitedCommandEntry`, `UI5Config`, `GroupInfo`, `KeyMacroAction`, `TestViewport`, `EntryPoint`, `ComparisonKind`, `ActionObject`, `DiffSettings`, `AsyncSubscription`, `PageItem`, `OrderableEdmTypeField`, `TDiscord.Message`, `IAssetPreviewProps`, `RouterLocation`, `GetActionTypeParams`, `Handler`, `SLL`, `StreamConfig`, `types.Span`, `NeisCrawler`, `OmitFuncProps`, `TimeOffService`, `Like`, `TracklistActions`, `DaffCategoryFilterEqualFactory`, `HDWalletInfo`, `CompilerModeStyles`, `SubMeshStaticBatch`, `LoadContext`, `GameData`, `DeleteDomainCommandOutput`, `requests.ListFastConnectProviderServicesRequest`, `RolloutTracker`, `ParseErrorLevel`, `SignatureInfo`, `ObjectLiteralExpr`, `ComponentProp`, `IAuthenticationService`, `d.DevServerConfig`, `OTRRecipients`, `SpyPendingExpectation`, `SynState`, `FX`, `RippleAPI`, `RelatedClassInfoJSON`, `TreePath`, 
`ToolbarWrapper`, `CollectionView`, `_m0.Writer`, `IRunExecutionData`, `GeneralInfo`, `IRestResponse`, `VariableState`, `t_3b6b23ae`, `CheerioStatic`, `MasterNodeRegTestContainer`, `ITableParseResult`, `Highcharts.AnnotationControlPoint`, `TResource`, `MarkupContent`, `NgZonePrivate`, `MatButtonToggleChange`, `ChannelOptions`, `IndicatorsData`, `GfxRendererLayer`, `NodeSourceType`, `IAppSettingsClient`, `ColorSwitchCCReport`, `ICtrl`, `JupyterFrontEndPlugin`, `HydrateStaticData`, `MapReward`, `HookType`, `StructServiceOptions`, `ListRevisionAssetsCommandInput`, `ARPlane`, `FamilyPage`, `SearchkitClient`, `TestUiItemsProvider`, `OpticFn`, `StatusBarService`, `IconProps`, `TreeNodeService`, `ShHeap`, `NftMeta`, `DeviceManagerImpl`, `LeakyReLULayerArgs`, `JointTransformInfo`, `ContextAccessor`, `NodeState`, `IProjectInformation`, `CSVMappingParameters`, `ILink`, `FilterPredicate`, `IRuntimeFactory`, `OrbitCameraController`, `MultilevelSwitchCCSet`, `execa.ExecaReturnValue`, `ModuleFormat`, `monaco.Uri`, `FlagType`, `CheckBuilder`, `ViewFactory`, `BrowserFetcherRevisionInfo`, `ReplicationRule`, `CombinedThingType`, `BSplineCurve3d`, `ScalePower`, `BaselineInfo`, `HttpHeaders`, `SendFunc`, `URLLoaderEvent`, `Cell`, `ViewStyle`, `PaginationProps`, `ParallelWorkflow`, `SecondLayerHandlerProcessor`, `MBusForm`, `WalletVersion`, `AngularDirective`, `Capacity`, `IUIMethod`, `IArgDef`, `RSPState`, `ColorPicker`, `Seq`, `StateReceipt`, `i0.ɵViewDefinition`, `System_Object`, `StringOrTag`, `TaskOption`, `CodeBuildAction`, `ParticleEmitter2`, `TrackProp`, `WindowCorrection`, `KeyboardEvent`, `requests.ListTagNamespacesRequest`, `EditorRenderProps`, `ISetCombinations`, `ConceptResponse`, `LexicalEnvironment`, `ToastComponent`, `VersionBag`, `ContributionRewardSpecifiedRedemptionParams`, `Offset`, `GenericWatchpoint`, `QueryDslQueryContainer`, `ComputedUserReserve`, `CAPIContent`, `EventEmit`, `GridIndicator`, `OutcomeType`, `World`, `AggFilter`, `CreateProfile`, `ThyPlacement`, `AddOutputRequest`, `files.SourceDir`, `TryNode`, `NamedBounds`, `IUpSetStaticDump`, `Named`, `FlowPreFinallyGate`, `RegTestContainer`, `VolumeType`, `ImageDataBase`, `TwingTemplateBlocksMap`, `IPresentationTreeDataProvider`, `DataAsset`, `FireLoopData`, `QuestionAdornerViewModel`, `tfl.LayersModel`, `Jsonable`, `Invocation`, `TestRelation`, `IRCMessageData`, `DataTypeFieldAndChildren`, `SourceStorage`, `MagickOutputFile`, `SocketMeta`, `LiftedState`, `Members`, `AsteriskToken`, `SearchCommand`, `ItemIdToExpandedRowMap`, `ReplFs`, `TimeHolder`, `SurveyModel`, `IStopsProvider`, `PrepareQuery`, `BinaryLike`, `Http3QPackEncoder`, `IIssuerConfig`, `CreateFlowCommandInput`, `TripleIds`, `BaseIndexPatternColumn`, `HdRipplePayments`, `ReadModelEnvelope`, `BasicResumeNode`, `IUIDefine`, `ColorResult`, `SourceDescriptionChunk`, `EmployeeAppointmentService`, `model.InstanceOf`, `GX.TexMapID`, `requests.ListInstancePoolsRequest`, `GmailMsg`, `NSSet`, `FlowItemAssign`, `TwingLoaderInterface`, `DynamicCommandLineParser`, `QuizLetter`, `TextBufferObject`, `WarehouseService`, `unicode.UnicodeRangeTable`, `Terms`, `RecordedTag`, `Coordinates`, `BinderFindBindedPositionRet`, `ts.VariableDeclaration`, `UnitMusteringRule`, `BaseN`, `Modifier`, `CreateMasternode`, `BoxListEntity`, `ICustomViewStyle`, `IGroupTemplate`, `SavedObjectMigrationMap`, `RuleWithCnt`, `AccountServiceProxy`, `FieldApi`, `DebugCallback`, `ImageTransformation`, `SurveyPDF`, `UAVariable`, `ITccSettings`, `InternalData`, `RnM2Primitive`, `LineCaps`, `TimeConfig`, `ServerMethods`, 
`JSONRoot`, `TransferTransition`, `TimePicker`, `TestFileSystem`, `SaleorClient`, `StyleSet`, `ContentType1524199022084`, `MachineInfo`, `AsyncUnorderedQueryFlow`, `Matchers`, `TabApi`, `UpdateRoomMetadataRequest`, `ISharingResponse`, `AnyToken`, `IMyDateRange`, `InstructionType`, `StockSandbox`, `BaseAttribute`, `EventLog`, `PaginationPayload`, `ResolveTree`, `ProcessInstanceTableEntry`, `FileResponse`, `OpenTarget`, `Tweet`, `PlayerInput`, `StatusController`, `EthersProvider`, `providers.BlockTag`, `IModalService`, `CustomStore`, `AlertInputOptions`, `Drag`, `ScanDetails`, `DeleteScalingPolicyCommandInput`, `HashData`, `LogLevels`, `DeleteDBClusterEndpointCommandInput`, `IAnswers`, `MALEntry`, `DiffLayouterFactory`, `SuggestionItem`, `Highcharts.NetworkgraphPoint`, `OptionalVersionedTextDocumentIdentifier`, `Aggregate`, `CiaoService`, `MDCChipAction`, `Search`, `MainAccessRequest`, `NetworkTargetGroup`, `Dep`, `InputBlock`, `UnsubscribeMethod`, `Window`, `CalendarInput`, `CommandRelay`, `TGraphQLContext`, `TableSchemaSpec`, `TypeScriptDeclarationBlock`, `IntTerm`, `QueryBuilderProps`, `AccountGoogle_VarsEntry`, `ng.IQService`, `IOnValidateFormResult`, `MetricsResults`, `CallParams`, `CurveColumnSeries`, `PlaybackRate`, `CSSTemplate`, `AwaitExpression`, `PageRequest`, `DefaultReconnectionHandler`, `DefaultChildrenWNodeFactory`, `ISchemaCollector`, `CssNodeType`, `ProgressList`, `ViewContext`, `Investor`, `WinState`, `TooManyTagsException`, `ConvertedDocumentUrl`, `BrowseCloudDocumentWithJobs`, `BlockOutputFormat`, `RecordObject`, `TimelineGridWrapper`, `GfxReadbackP_GL`, `HtmlContextTypeConvert`, `IAuthContextData`, `CompressedId64Set`, `AnyEvent`, `Aliases`, `ParametricRegExp`, `TrustIdHf`, `IServerSideGetRowsRequest`, `ScrollPosition`, `ApplicationDefinition`, `ShortUrl`, `CallingBaseSelectorProps`, `EncryptionLevel`, `DebugCurve`, `PayableOverrides`, `Triple`, `CreateSavedObjectsResult`, `ProposeMessage`, `Grouping`, `RibbonEmitterWrapper`, `ProblemViewPanel`, `ViewColumn`, `d.SerializeImportData`, `IBlobINode`, `EmittedMessage`, `Polyline`, `d.FunctionalComponent`, `GroupMetadata`, `DefaultRes`, `FleetMetricSummaryDefinition`, `ChangeAccumulator`, `DigitalObjectSet`, `ProductMap`, `MinimalNodeEntity`, `SMTDestructorGenCode`, `ttm.MockTestRunner`, `ChangesetGenerationHarness`, `Addon`, `LayoutPartialState`, `VisHelpTextProps`, `CasesClientInternal`, `AuxChannel`, `THREE.Line3`, `Package.Target`, `ts.ClassElement`, `NormalizedPath`, `CancelablePromise`, `RequestValues`, `RegistrationPage`, `BaseInteractionManager`, `msRest.OperationSpec`, `GeomEdge`, `AuthenticationProgramStateBCH`, `LogMeta`, `_1.EventTargetLike.HasEventTargetAddRemove.Options`, `grpc.Code`, `ValidationController`, `WeightsManifestEntry`, `NetworkInfoStore`, `LocalMigration`, `IBlock`, `RenderCallback`, `ListWorkspacesCommandInput`, `Main`, `InputContext`, `StringNote`, `ListArtifactsCommandInput`, `MockedOptions`, `Describe`, `SIGN_TYPE`, `NumOrElement`, `ExecutionConfig`, `MigrateFunction`, `Git.IStatusFile`, `ResponsivePartialState`, `FetchEvent`, `SDPCandidateType`, `jest.CustomMatcherResult`, `IValidator`, `CodeMaker`, `AppConfiguration`, `$mol_atom2`, `LoginDTO`, `WithGeneric`, `ISerializedRequest`, `AnyError`, `IFragment`, `DatasourceRefType`, `PageImportExportTask`, `Symbols`, `TodoAction`, `LegacyReputationToken`, `AnimalType`, `SolveType`, `JPAFieldBlock`, `CircleObject`, `Char`, `ActionStatusResolverService`, `ISignerProvider`, `AccountFixture`, `ModelDefinition`, `CandidateInterviewService`, `Vuetify`, 
`LambdaOutput`, `Knex.CreateTableBuilder`, `ScanDb`, `ActiveTaskExtended`, `AngularExternalStyles`, `LineConfig`, `Story`, `FooterComponent`, `OscType`, `ModulusPoly`, `NewSpecPageOptions`, `RepoData`, `ActionListItem`, `ReactiveObject`, `Coord`, `RTCPeerConnectionIceEvent`, `RequestApprovalTeam`, `PlatformAccessory`, `_BinaryWriter`, `RoomObject`, `StackItemType`, `CucumberQuery`, `DeleteConnectionRequest`, `ImportIrecDeviceDTO`, `SeriesTypeOptions`, `RoomPayload`, `GitHubApi`, `MatchExpression`, `SpectatorServiceFactory`, `PerQuadrant`, `ExpShape`, `SecurityCCCommandsSupportedReport`, `MapToType`, `IFolder`, `TBookAuthorMapping`, `TestFixtureComponent`, `CompositionItem`, `ExportJob`, `CanvasSide`, `Nodes.Node`, `SimpleCondition`, `MediaWiki`, `_1.Operator.fλ.Stateless`, `MutationObserver`, `JSXElement`, `MdcSelect`, `ManifestCacheChangedEvent`, `TikTokConstructor`, `FlattenSimpleInterpolation`, `Rule.RuleModule`, `BasicTarget`, `HeftSession`, `FlameGraphNode`, `RemoveArrayControlAction`, `QueryAllProvidersAttributesRequest`, `ImportService`, `Methods`, `uint16`, `DescribeEventSubscriptionsCommandInput`, `p5.Color`, `requests.ListManagedInstanceErrataRequest`, `EngineType`, `IlmPolicyMigrationStatus`, `MultiFileRenderResult`, `VertexBuffer3D`, `ActionEffectPayload`, `LavalinkNode`, `UseFormReset`, `LyricFont`, `IConfigOptions`, `PlaceholderComponent`, `YaksokRoot`, `CollisionPartsFilterFunc`, `NetworkPolicy`, `MessageInterface`, `IOHandler`, `PortSet`, `IConfigurationModify`, `NavigationEdgeStatus`, `ExperimentPhase`, `StateInvocationParams`, `ComponentModel`, `Common.ILog`, `DOMElement`, `Octokit`, `BaseChannel`, `CodePoint`, `ReturnTypeFuncValue`, `TransactionFormSharedState`, `CancelParameters`, `IClock`, `IDocumentWidget`, `RootObject`, `Selectable`, `GetActionParams`, `SelectionDirection`, `ClusterRoleBinding`, `IButtonProps`, `BarcodeScannerOperationParams`, `IBundleWithoutAssetsContent`, `RsRefForwardingComponent`, `DateSkeleton`, `SessionManager`, `CT`, `ProofItem`, `YoganOptions`, `ContentTypeReader`, `SendMessage`, `ModelBuilder`, `Disposer`, `TRawComponent`, `IScript`, `IndoorMap`, `UpdateResult`, `ApolloReactHoc.OperationOption`, `AnimationGroup`, `ConvectorControllerClient`, `MatchModel`, `A8k`, `Optional`, `ElasticsearchResponseHit`, `PostConditionMode`, `CellData`, `ParsedTranslation`, `WithSubGenericInverted`, `ScreenDetailDto`, `ScreenshotConnectorOptions`, `PackageJsonFile`, `OverlayStart`, `DiskOptions`, `MockLink`, `postcss.LazyResult`, `data`, `Testing`, `UserPresence`, `JsonStringifierContext`, `TokenizerConfig`, `HostSettings`, `ValidTimeoutID`, `Exceptions`, `FIRVisionImage`, `CrudRepositoryCtor`, `JsonConfig`, `GrowableFloat64Array`, `IStorageProvider`, `StrategyOrStrategies`, `EditorSchema`, `WorldService`, `MutateResult`, `ChipDirective`, `ts.TypeAliasDeclaration`, `WorkspaceStructure`, `CompactInt`, `PropIndex`, `StringAtom`, `Tsa.SourceFile`, `CliOptions`, `QueryConditionOptions`, `ResultMapper`, `PedAppearance`, `ScrollView`, `Swiper`, `IExtentStore`, `AError`, `AnimatorControllerLayer`, `IInterceptor`, `SearchInWorkspaceRootFolderNode`, `EncodedDeviceType`, `CfnPolicy`, `NormalizedExtension`, `requests.ListLimitDefinitionsRequest`, `AntiVirusSoftware`, `UpdateUserSettingsCommandInput`, `SystemVerilogExportInfo`, `UseLazyQueryState`, `JMapIdInfo`, `R1`, `IRawStyle`, `Pathfinder`, `ProjectConfig`, `TestcaseType`, `GetMyOrganizationCommand`, `AppService`, `vscode.TreeItemCollapsibleState`, `Toolkit.IPluginExports`, `KeyPhrase`, `SkipListNode`, 
`SnsMetricChange`, `Actions`, `AddToCaseActionProps`, `Prose2Mdast_NodeMap_Presets`, `IChatMessage`, `TextCanvas`, `IDirective`, `CharacterMetadata`, `IRoot`, `SubcodeLine`, `ExceptionBlock`, `NDArray`, `TaskExecutor`, `Visualization`, `UrlTemplate`, `StateStorageService`, `DestructuringAssignment`, `SkipListSet`, `TypeDBOptions`, `DocumentLinkParams`, `STPPaymentHandlerActionStatus`, `XmlAttributes`, `RolandV60HDConfiguration`, `Export.DefaultInterface`, `QueryParams`, `ActorId`, `FilterFor`, `ShowModalOptions`, `DescribeSourceServersCommandInput`, `TConvData`, `InterceptorFn`, `RelationsInputType`, `RectLike`, `KeyStore`, `vscode.ViewColumn`, `MethodHandler`, `UnpackNode`, `LogAnalyticsParser`, `TelegramBot.Chat`, `MenuCardProps`, `CityBuilderStore`, `vile.YMLConfig`, `DevToolsNode`, `ISnapshotTreeEx`, `OrdersService`, `React.LegacyRef`, `Run`, `VnetGateway`, `TokenStorage`, `CategorySegment`, `d.CompilerFileWatcherEvent`, `BaseToken`, `RetryConfigurationDetails`, `StatusResponse`, `CardContextOptions`, `MarkEncoding`, `AbortController`, `FileQuickPickItem`, `AngleFromTo`, `Stitches.PropertyValue`, `AllState`, `NormalizedMessage`, `SymbolInfo`, `Releaser`, `OutputLocation`, `MessageTimer`, `Marker`, `MouseState`, `aws.S3`, `ArcTransactionResult`, `StringLiteralLike`, `IRegistryInfo`, `TSESTree.Identifier`, `BitcoinCashAddressFormat`, `VanessaTabs`, `ODataServiceFactory`, `UseMutationResponse`, `MyEditor`, `RequestInput`, `GroupChannel`, `AttrRewriteMap`, `CounterAction`, `ContractWrapperFactory`, `PathFilter`, `Skeleton_t`, `PatternLiteralNode`, `ExpressionsServiceSetup`, `RepoFrontend`, `AutoTranslateServiceAPI`, `PartType`, `MockContractFactory`, `PostConditionPrincipal`, `GraphEdges`, `NotificationService`, `PedigreeConstraint`, `ICategoricalStatistics`, `RuntimeCacheInterface`, `GlobalAction`, `IProcess`, `DisassociateFromMasterAccountCommandInput`, `apid.RuleId`, `DocsBinding`, `AnalysisOptions`, `BasePeerType`, `InjectableDependency`, `VcsFileChange`, `IMaterialAttributeOptions`, `DocumentInfo`, `RAFirebaseOptions`, `FilterCriteria`, `Facet`, `PromoteGroupUsersRequest`, `ConvLSTM2DArgs`, `CausalObjectStore`, `Z64LibSupportedGames`, `TextDiff`, `CinemaHallSeat`, `DateTimeFormatOptions`, `RestSession`, `ConvWithBatchNorm`, `MerkleTreeNode`, `RouteInfoWithAttributes`, `ENDProgram`, `UpperMainBelow`, `NodeFileSystem`, `IToaster`, `ts.BinaryExpression`, `QueueServiceClient`, `FieldData`, `TelemetryOptions`, `FromToWithPayport`, `PluginLoaderService`, `ParameterDesc`, `ValidResponses`, `DecodedData`, `TypeReferenceNode`, `PropertySignatureStructure`, `IHostedZone`, `SupportedLocale`, `WorkflowStep`, `Application`, `Arpeggiate`, `KernelFunc`, `UserCredentials`, `ShaderInstance`, `VectorSourceRequestMeta`, `JitMethodInfo`, `MESSAGE_ACTIONS`, `ArtifactDownloadTicket`, `LoginFieldContainer`, `TabbedAggRow`, `BezierSeg`, `SymString`, `Fog`, `NucleusChannel`, `DiffSelection`, `DocController`, `DocumentFilter`, `EditorController`, `StatusBarItem`, `DimensionRecord`, `ThemeOptions`, `BindingOrAssignmentElement`, `WalletAccount`, `DeployHelpers`, `TimelineState`, `Module`, `Vp9RtpPayload`, `StyleType`, `UIInterfaceOrientation`, `CustomRenderElementProps`, `MutableMatrix22`, `NotificationProps`, `JQueryDeferred`, `CreateChannelMessage`, `NotebookDocument`, `Modifiers`, `NodeJS.Platform`, `MockStorage`, `Audio`, `TaskRunnerFactoryInitializerParamsType`, `IDirectory`, `ColorInformation`, `UserPreferencesService`, `cc.Vec3`, `PoisonPayload`, `PrimitiveShape`, `IParseInstruction`, `SubcodeWidget`, 
`ResolveType`, `TabId`, `QueryFlag`, `CSSState`, `MActorLight`, `IGarbageCollectionData`, `HapticOptions`, `RouterStore`, `ParamData`, `AnimatorClassSettings`, `requests.ListCustomProtectionRulesRequest`, `AccountFacebookInstantGame_VarsEntry`, `Dockerode`, `SlotId`, `IYamlApiFile`, `MatCheckboxChange`, `VisibilityType`, `SendCommandOptions`, `BaseDirective`, `ButtonToolConfig`, `React.BaseSyntheticEvent`, `MessageOrCCLogEntry`, `StyleStore`, `IStageManifest`, `XY`, `ThingMetaRecord`, `AllowedNetworks`, `BaseVerifiableClaim`, `ITimeToSampleToken`, `OutdatedDocumentsSearchRead`, `PackageManagerCommands`, `SingleConfig`, `UseAsyncReturn`, `BaselineOptions`, `CreateApp`, `CertificateAuthorityRule`, `SonarQubeConfig`, `TabStorageOptions`, `TypeAttributeMap`, `CustomResponse`, `IPanel`, `GestureStateEventData`, `PackedTrie`, `PolyPoint`, `Compressor`, `DaffQueuedApollo`, `VideoObject`, `TerminalNode`, `ListenForCb`, `Writable`, `WorkTree`, `ChatPlugService`, `LiveObject`, `CardManifest`, `i32`, `IGetMembersStatistics`, `FileHashCache`, `Sorter`, `MapControlsUI`, `MsgUpdateDeployment`, `HubInfo`, `CategoryRecordsDict`, `CreateComponent`, `FormatProvider`, `ResultProps`, `ActivityStatus`, `CreateFileSystemCommandInput`, `WebpackType`, `WishListRoll`, `CliProxyAgent`, `FastFormContainerComponent`, `TypeChange`, `MatSnackBar`, `IClassicListenerDescription`, `FcEdge`, `AnnotationData`, `AFSReference`, `LocationData`, `IndTexMtx`, `ConditionalArg`, `Domains`, `FloorCode`, `DictionarySchema`, `CacheSnapshot`, `BottomNavigationViewType`, `ApolloServerPlugin`, `DeleteIPSetCommandInput`, `PresenceSync`, `CoreCompiler`, `Tnumber`, `SalesOrderState`, `RESTResponseDataType`, `TransliterationFlashcardFields`, `Stem`, `AccountKey`, `SlotAst`, `TaggedProsemirrorNode`, `wdpromise.Promise`, `Manifest`, `GroupHoldr`, `Electron.MenuItem`, `ODataStructuredType`, `DeserializeOptions`, `CardId`, `Ingress`, `QuadrantRow`, `DynamicMatcher`, `GridSize`, `VariableRegistry`, `CertificateSummary`, `OptionValues`, `ClientDetails`, `DtlsPlaintext`, `DeserializeWireOptions`, `GetCellValueFn`, `RootData`, `AFSQuery`, `ScopeManager`, `WalletInit`, `IHttpProvider`, `GfxTextureP_GL`, `Mass`, `YjsEditor`, `JPA.JPABaseEmitter`, `SearchResultsArtist`, `Pointer`, `ComponentConstructor`, `ILogger`, `_IType`, `VDocumentFragment`, `PyJsonValue`, `WalletAdapter`, `ResultError`, `Mars.TransactionOverrides`, `CommonLanguageClient`, `TestSettings`, `SVGCircleElement`, `ClassGenerics`, `TabsModel`, `OperatingSystem`, `LoggedInUser`, `Charset`, `ContractAbiDefinition`, `CircleDatum`, `SpecFiles`, `PlantProps`, `WidgetObject`, `SignedTx`, `SqlFragment`, `MiBrushAttrs`, `d.ComponentCompilerPropertyType`, `CodeNameDTO`, `ResolvedGlobalId`, `InternalPlugin`, `ISnapshotProcessor`, `AlertInstanceState`, `WebApi.JsonPatchDocument`, `GbBackendHttpService`, `PermissionResponse`, `ArchDescr`, `GeneratorSourceConfig`, `SRoutingHandle`, `InvalidNextTokenException`, `CreateCertificateDetails`, `ICommandResponse`, `UserInfoData`, `TokenSmartContract`, `TProduct`, `protocol.FileLocationRequestArgs`, `AlignItems`, `JSBI`, `StreamParam`, `EffectRef`, `RtkRequest`, `SchemaOptions`, `PBRStandardMaterial`, `RenderObject`, `ConstructorAst`, `PutDedicatedIpInPoolCommandInput`, `PlayerSubscription`, `ExpressionValue`, `DeleteWebhookCommandInput`, `React.ChangeEvent`, `BindingKey`, `TriangleFilterFunc`, `CryptoContext`, `basic.Token`, `RemoteDataBuildService`, `MessageAttributes`, `PossibleSelections`, `WalletOrAddress`, `VisibilityNotifier2D`, 
`OverridedSlateBuilders`, `ScopedPlannerConfiguration`, `NotificationRequestInput`, `ContextMessageUpdate`, `CreateAppInstanceAdminCommandInput`, `ConvertedRemoteConfig`, `MetricsServiceSetup`, `MemoizedSelector`, `IOptionalIType`, `ReferenceUsedBy`, `BoundsData`, `SavingsService`, `TargetProperty`, `Feed`, `ConstraintService`, `ANIME_DICT`, `Readonly`, `FinalConfig`, `ErrorProps`, `SVGPathFn`, `OrganizationPoint`, `StyledComponent`, `RGBColor`, `JSDocsType`, `ListFileStatResult`, `UploadFileOptions`, `DisplacementRange`, `WalletModule`, `KeyframeInfo`, `StructureSpawn`, `CharStream`, `IVocabulary`, `AnnotationType`, `TRight`, `MFAPurpose`, `NoteSize`, `CollectionReference`, `DeclarationReflection`, `BigInteger`, `ISetCombination`, `IndicesService`, `QueryRunner`, `WorkflowState`, `CallHierarchyPrepareParams`, `ANodeStmList`, `RollupConfigurationBuilder`, `ExecaError`, `LanguageServer`, `CursorContent`, `MetricOptions`, `GUIDriverOptions`, `ConvertionResult`, `EditMode`, `XYChart`, `LocationId`, `DaffOrder`, `ViewState`, `WithKeyGeneric`, `vscode.TextEditorEdit`, `LayerProps`, `ApprovalRuleTemplate`, `GPUPipelineLayout`, `ExchangeParams`, `ConnectedOverlayPositionChange`, `IServiceLocator`, `OrgMember`, `WebsocketMessage`, `DaffAccountRegistration`, `DefaultReconnectDisplay`, `VideoDialog`, `RemoteObject`, `IReceiveParams`, `MXMirrorObjMethodCall`, `RouteResult`, `Amounts`, `PageMaker`, `LoadmoreFlatNode`, `HtmlParser`, `ts.Scanner`, `HashMap.Instance`, `Escape`, `H`, `SemanticType`, `Unionized`, `Numbers`, `ScaffoldType.Local`, `DatePickerValue`, `DocumentChange`, `ServerEventEmitter`, `d3Request.Request`, `JSX.IntrinsicAttributes`, `IntrospectFn`, `UnitBase`, `BTCSignedTx`, `Codebase`, `AppManifest`, `BasicKeyframedTrack`, `SignalMutation`, `Control3D`, `CreateTableNode`, `ConcatenateLayerArgs`, `GX_Material.GXMaterial`, `Redis`, `IBufferService`, `fromSettingsActions.UpdateSettingModel`, `BaseClusterConfig`, `NormalizedConfigurationCCAPISetOptions`, `IPluginAPI`, `Reddit`, `AudioInterface`, `ConsoleContext`, `RenderFunction`, `VcsInfo`, `GuildMember`, `InferredFormState`, `ConfigData`, `Estimate`, `Wildcard`, `HistoryViewContext`, `TaskChecker`, `VRDisplay`, `MDCMenuAdapter`, `Cutline`, `FormDataEntryValue`, `PackageRegistryEntry`, `SchemaProperty`, `Layers`, `Events.predraw`, `AdjacentList`, `TsTabCollectionComponent`, `PaginatedSearchOptions`, `LocalVueType`, `BuildPipelineVisFunction`, `PageHeaderProps`, `Pact`, `CircleEditOptions`, `AppenderConfigType`, `WKWebView`, `ThrottleOptions`, `That`, `Models`, `PreloadedState`, `io.SaveConfig`, `EntityStatus`, `FunctionFactory`, `DefaultApp`, `FindManyOptions`, `TurnClient`, `IModalContent`, `Events.initialize`, `MappingFn`, `FieldHierarchyRecord`, `JSXMemberExpression`, `IPropertyIdentValueDescriptor`, `MsgCloseLease`, `Permute`, `SymbolCategory`, `cwrapSignature`, `ImGui.U32`, `MockRequestParams`, `TSType`, `HashValue`, `nodes.Stylesheet`, `SparseMerkleTree`, `DeleteDatasetGroupCommandInput`, `SlotDefaultValue`, `TypedEventFilter`, `SocketOptions`, `MockPort`, `AsyncActionType`, `IEndExpectation`, `ObjectSet`, `WithGetterString`, `ts.Identifier`, `LogCallbackType`, `VersionMismatchFinderEntity`, `OutlineSymbolInformationNode`, `IDinoProperties`, `GridsterItemComponentInterface`, `IDatabaseDataActionClass`, `TagInformation`, `Opt`, `Poly`, `GraphRecord`, `GameModule`, `ImageryMapExtentPolygon`, `RuleId`, `DownwriteUIState`, `Components`, `ICheckAnalysisResult`, `PrincipalCV`, `WorkflowStateType`, `GridChildComponentProps`, 
`ConfigAccumulator`, `GraphQLRequest`, `EntityToFix`, `StorageMigrationToWebExtension`, `Responses.IViewContentItemResponse`, `primitive`, `TaroElement`, `ITsconfig`, `OpenDateRange`, `protocol.Location`, `Optimization`, `RtfDestination`, `Register64`, `GX.LogicOp`, `IntrospectionType`, `AuthorizationError`, `IOpenSearchSearchResponse`, `ResultNode`, `AttendanceStatusType`, `AccentColor`, `NumericNode`, `DynamoDB.DeleteItemInput`, `IFilters`, `SSAState`, `SessionStorage`, `S3.Types.PutObjectRequest`, `SettingOption`, `GroupSpec`, `OpenObject`, `Plan`, `TransportContext`, `CustomDialogOptions`, `WorkerContext`, `LocalState`, `EventSpy`, `ValueFormField`, `SymShape`, `BabelTarget`, `SectionsService`, `ExternalStyleCompiler`, `StringShape`, `TEvent`, `AntVSpec`, `requests.ListDbCredentialsRequest`, `RedAgateElement`, `NgSelectConfig`, `P2SVpnConnectionRequest`, `TemplateProps`, `ListTagsForResourceCommandInput`, `ControlBase`, `PluginMetrics`, `BindingWrapper`, `ITargetInfoProps`, `IORedis.Redis`, `ITaskLibrary`, `PuzzleState`, `CarService`, `PlanetPortalApplication`, `GitHubRepoData`, `vscode.WorkspaceConfiguration`, `VaultEntry`, `LiveEventSession`, `YieldNode`, `Defer`, `VisDefaultEditor`, `GUILocationProperties`, `ArrayFunc`, `Type_Interface`, `StepAdjustment`, `Nerve`, `AxisMilestone`, `FieldsSelection`, `ICompetitionDefault`, `ViewController`, `PaperProps`, `CstNode`, `SubsetConstraints`, `ApiTableData`, `TooltipContextValue`, `UseQueryResult`, `AggregateRowModel`, `StateT1`, `DFAState`, `NavigationItem`, `JsonDocs`, `CustomFilterArgs`, `Skin`, `PositionStrategy`, `IRenderData`, `CommentRequest`, `PluginExtended`, `StackLayout`, `ClientDTO`, `IAssetProvider`, `GraphQLServiceContext`, `py.Expr`, `DescribeRegistryCommandInput`, `StyledTextProps`, `IProjectsRepository`, `DescribeJobCommandInput`, `Pie`, `ISnapshot`, `Tuple`, `ApiV2Client`, `OriginGroup`, `RoomParticipantIdentity`, `ServiceDiscoveryPlugin`, `Webhooks`, `Tensor5D`, `BIP85`, `NgxUploadLogger`, `PartialResults`, `DAL.DEVICE_ID_SYSTEM_MICROPHONE`, `SVGLineElement`, `NgrxJsonApiStoreData`, `ConflictMap`, `AssetProps`, `IActivitiesGetByContactState`, `RegisterInput`, `Workunit`, `StaticArray`, `ActionObservations`, `InterventionTip`, `StaticSiteCustomDomainRequestPropertiesARMResource`, `StdlibRegistry`, `TinaFieldEnriched`, `ListRenderItem`, `AZSymbolInformation`, `IOriginConfiguration`, `SearchQuery`, `OpenSearchError`, `CreateTagsCommandInput`, `InternalMetric`, `MenuStateReturn`, `Dock`, `GunMsg`, `StateKey`, `PublishParams`, `RangeDelta`, `IamRoleName`, `ViewGroup`, `ResolverRelation`, `MarkdownNode`, `ViewModel`, `CreateDataAssetDetails`, `AwsShapes`, `FundedAward`, `ImageType`, `V1Pod`, `PackageDefinition`, `ModuleType`, `MonthViewProps`, `TreeItemCollapsibleState`, `CliCommandOptions`, `OperationHandlerPayload`, `BuildSourceGraph`, `SequentDescriptor`, `IUserDTO`, `DropdownMenuItemLinkProps`, `ScannedPolymerElement`, `GsTeam`, `ScreenMatrixPixel`, `Analyzer`, `HttpClientRequest`, `MediaFile`, `RowLayoutProps`, `requests.ListAutonomousDatabaseBackupsRequest`, `ComponentHost`, `ContentInfo`, `OnReferenceInvalidated`, `Errors`, `ChromeExtensionManifest`, `PubKeyType`, `GraphQLInputObjectType`, `ClassElement`, `T19`, `EventHint`, `MfaOption`, `ClusterCreateSettings`, `FieldsetContextProps`, `LayoutState`, `HistoryInteractionEvent`, `Themer`, `PresetType`, `requests.ListCachingRulesRequest`, `ManyApiResponse`, `Notebook`, `ConstructorOptions`, `$ResponseExtend`, `SelectFileModel`, `TsOptionComponent`, `ProcessService`, 
`DocService`, `IMovable`, `UserDoc`, `ClickHandler`, `Ball`, `Crosshair`, `CandleGranularity`, `LocalizedSteps`, `FlashbotsBundleProvider`, `OasOperation`, `RejectInvitationCommandInput`, `MethodHandle`, `AbiFunction`, `ProxyInfo`, `MaxAttrs`, `RuleDescriptor`, `OmitsNullSerializesEmptyStringCommandInput`, `PropOfRaw`, `MaterialOptions`, `K.BlockStatementKind`, `LinkResolverResponse`, `NodeJS.ErrnoException`, `QuicStream`, `CreateMembersCommandInput`, `FunctionAppEditMode`, `ClassDetails`, `AddressBookService`, `QueryParser.QueryNode`, `RSPSharedOutput`, `KintoClient`, `CanvasBreakpoints`, `MapMeshStandardMaterial`, `DatabaseContainer`, `TestRenderNode`, `ParamValues`, `ChainTokenList`, `Changes`, `Spell`, `AbiParameter`, `SavedKeymapData`, `FunctionSetting`, `FileExtension`, `BuildPackage`, `MutationSubState`, `ConditionExpressionDefinitionChain`, `firebase.app.App`, `CatalogLayoutState`, `IOidcIdentity`, `WalletConfig`, `Got`, `IFormContext`, `d.ComponentCompilerStaticMethod`, `AABBOverlapResult`, `Light`, `OptionProps`, `HandleError`, `PhotoDataStructure`, `UpdateAppInstanceCommandInput`, `CommitOrDiscard`, `IPaintEvent`, `GRULayerArgs`, `Hero`, `ZoweDatasetNode`, `SankeyGraph`, `EditWidgetDto`, `ServiceStatus`, `WithdrawAppState`, `Denque`, `Knex.ColumnBuilder`, `EditorAction`, `CloudPoint`, `IQueryInfo`, `AuthenticateGoogleRequest`, `HsLaymanService`, `TextTrackCue`, `V1StepModel`, `PaginatedRequestOptions`, `WatchOptions`, `AzureFirewall`, `DiscogsTrack`, `IndexerManagementClient`, `IOnSketchPreviews`, `RenameInfo`, `ApiToken`, `XMLAttribute`, `LoopBounds`, `RawDoc`, `MultiTrie`, `Relayer`, `LemonTableColumns`, `ColorStyle`, `ParsedAccount`, `ICollectItem`, `SyntaxError`, `InputParamMapper`, `ICfnFunctionContext`, `DropTarget`, `ForwardingParams`, `PluginStorageKind`, `CreateIndexBuilder`, `HubUtility`, `IFrontendDomChangeEvent`, `TestDirectEscrow`, `ICoreMouseEvent`, `ListAvailabilityDomainsResponse`, `StylesMap`, `SimControlLog`, `HdmiInput`, `Staking`, `HumanData`, `DocumentSymbolProvider`, `Events.preupdate`, `AggsCommonStartDependencies`, `LedgerTransport`, `t.SourceLocation`, `OpenDialogReturnValue`, `NodeAttributes`, `IComponentWithRoute`, `Attachment`, `GetDedicatedIpsCommandInput`, `StatementNode`, `SysUser`, `InputActionMeta`, `AdamaxOptimizer`, `GfxInputState`, `Path2`, `ILineGeometry`, `TT.Level`, `BasicSourceMapConsumer`, `TableNS.RowProps`, `requests.ListAutoScalingConfigurationsRequest`, `QuickPick`, `DietForm`, `F3DEX_Program`, `StoreGetter`, `IStepAdjustmentView`, `BaseShrinkwrapFile`, `IApplicationShell`, `TLang`, `UnionMemberMatchTransformer`, `PluginTransformResults`, `picgo`, `IdentityDictionary`, `numVector`, `Constant`, `Query`, `ServerStatus`, `pxt.Map`, `DropData`, `TallySettingsIni`, `Journey`, `CharacterCreationPage`, `GetDomainNamesCommandInput`, `SpatialCategory`, `GetAdministratorAccountCommandInput`, `FirestoreUserField`, `IBaseComponent`, `SQLParserVisitor`, `MIREntityTypeDecl`, `Structure`, `SaveFileArgs`, `ABN`, `StringLocation`, `ValidationParams`, `UsersService`, `LinkService`, `HTMLTableHeaderCellElement`, `Archiver`, `IStorageSchema`, `MouseService`, `Order2Bezier`, `Packages`, `ContentLinesArrayLike`, `EpicTestMocks`, `JSDefinitionNode`, `InsertQueryBuilder`, `RadSideDrawer`, `LatLon`, `TAccesorData`, `ReadonlyDeep`, `StoreItem`, `ServerRoute`, `AppxEngineActionContext`, `TransactionQueryPayload`, `ImageTexture`, `SubmissionObject`, `IAuthorizer`, `Widgets`, `BudgetGroup`, `MultilevelNode`, `DesignerVariable`, `requests.ListPublicationsRequest`, 
`KeyFrame`, `ControlDirective`, `Accessor`, `ZWaveController`, `MigrationService`, `Loading`, `RequestPolicyOptionsLike`, `SwapEncoder`, `Papa`, `NoteStateWithRoot`, `DIContainer`, `TestGroup`, `CardDatabase`, `EmittedObject`, `TMetricAggConfig`, `VideoConverter`, `PointerButton`, `IUserProfile`, `AsyncModuleConfig`, `IDebugProvider`, `ElTreeModelData`, `ActivityTimeSeriesMetrics`, `StartPoint`, `DetectedFeatures`, `GameTreeNode`, `AuthOptions`, `Budget`, `ISummaryHandle`, `Guide`, `BackendTimingInfo`, `PostResult`, `DaffCartItemInput`, `IMrepoConfigFile`, `CourseDuration`, `MavenTarget`, `IWhitelistUser`, `KeyboardEventToPrevent`, `RateLimitOptions`, `AreaPointItem`, `AccountResource`, `UntagResourceCommandInput`, `pulumi.ResourceOptions`, `ShoutySession`, `ListBackupsRequest`, `Big`, `d.HostElement`, `CdsInternalPopup`, `MccScrollspyGroup`, `EnrichedPostageBatch`, `BasicLayoutProps`, `UseQueryStringProps`, `GridGraph`, `TopNavigationEntry`, `Vnode`, `InsertContentDOM`, `CompletionOptions`, `DeleteGroupRequest`, `PushToServiceResponse`, `requests.ListIncidentResourceTypesRequest`, `PieceSet`, `ConverterFunction`, `ImageDefinition`, `CompositeContentBuilder`, `Migrate`, `ValidationSchema`, `RatingProps`, `NestedDict`, `LinksList`, `UserId`, `ComboBoxGroupedOptions`, `CommandRegistry`, `PrimaryKeyOptions`, `ElkNode`, `JSONTree`, `StreamResetResponseParam`, `UseMutationReducerAction`, `SchemaTypes`, `DrawingNode`, `SpeedDialItem`, `ELULayerArgs`, `QueryOrderMap`, `CoordinateConverter`, `MissingFilter`, `Reducer`, `RippleRef`, `CreateTaskCommandInput`, `IFunctionParameter`, `CurveVector`, `IDisposable`, `TileMatrixType`, `MyPromise`, `DeleteResourcePolicyCommandInput`, `TTarget`, `EdgePlaceholder`, `express.Response`, `DbTokenMetadataQueueEntry`, `Jsonp`, `MVideoThumbnail`, `JSDOM`, `ApplicationConfigService`, `Restangular`, `ListTagsForResourceResponse`, `FleetRequestHandler`, `ExtendedMesh`, `ChangeEvent`, `ReportParameter`, `UseCaseBinder`, `DeadLetterConfig`, `Vector3_`, `Cross`, `PolylinePoint`, `ActionExecutionContext`, `ProjectsService`, `ClarityAbiType`, `FieldDoc`, `PsbtTxOutput`, `OwnerItemT`, `RangeError`, `DataPromise`, `GhcModCmdOpts`, `ts.ReturnStatement`, `ProtoPos`, `CsvInputOptionsNode`, `ITokensState`, `Seed`, `InputStyleProps`, `ChartDataset`, `VerifiableClaim`, `TSetting`, `UpdateApplicationResponse`, `OneToManyOptions`, `FabricGatewayRegistryEntry`, `MessageOption`, `SessionProposal`, `HookConfig`, `SubmissionQueueItem`, `LS.CancellationToken`, `WebGLVertexArrayObject`, `ReAtom`, `SharePlugin`, `PendingUpdateDetails`, `ApplicationConfigState`, `d.OutputTargetDocsJson`, `Panel`, `Visitors`, `JieQi`, `MonzoAccountResponse`, `DirectoryEntry`, `SyntaxType`, `CreateEntrypoint`, `IndexFileInfo`, `ServicesState`, `FnReturnType`, `StynTree`, `AnalyicsReporterConfig`, `QWidget`, `MlCommonUI`, `PlaneByOriginAndVectors4d`, `ExpressLikeStore`, `Flicking`, `UserManagerInstance`, `GitHubIssueOrPullRequest`, `GethInstanceConfig`, `DashboardContainerOptions`, `StreamID`, `TimelineRecord`, `TextDocumentRegistrationOptions`, `GeoService`, `Book`, `WalletManager`, `CoreTracer`, `alt.RGBA`, `UIImage`, `AnyRouter`, `WatcherFactory`, `ProcessResult`, `PluginModule`, `common.ConfigFileAuthenticationDetailsProvider`, `UploadOptions`, `LocalizationProviderProps`, `address`, `HTMLTableCellElement`, `MenuProps`, `ITwin`, `GtConfigSetting`, `FormikActions`, `code.Uri`, `PluginManager`, `UserReport`, `OptimizationPass`, `MActorId`, `ITimelionFunction`, `requests.GetJobLogsRequest`, 
`AutoImportResultMap`, `CardContentProps`, `IFirmwareCodePlace`, `CreateGrantCommandInput`, `Lyric`, `DaffCategoryFilterEqual`, `IconButtonProps`, `apid.BroadcastingScheduleOption`, `TelemetryData`, `XYZStringValues`, `AcornNode`, `IFormItem`, `MediaRec`, `ReactClient`, `AnimationConfig`, `TestConfigOperator`, `CellType`, `ProtocolParams.Propose`, `ParsedJob`, `int32`, `SoFetch`, `MigrationStatus`, `NavController`, `SubscriptionCategoryNotFoundFault`, `WithAttributes`, `ClaimStrategy`, `Recorder`, `OctreeNode`, `SchemaUnion`, `MapsManagerService`, `WorkDoneProgressServerReporter`, `MimeType_`, `IUserRegistrationInput`, `MouseOrTouch`, `InstalledDetails`, `requests.ListTaskRunLogsRequest`, `CreateImageCommandInput`, `SqrlCompiledOutput`, `ts.ForStatement`, `UseMutationState`, `BitBucketServerAPI`, `AstWalker`, `UpdateNoteRequest`, `ImageBox`, `FlowLog`, `SubscriptionOptions`, `GetResourcePoliciesCommandInput`, `ThyAutocompleteContainerComponent`, `ActionImportRequestBuilder`, `SankeySeries.ColumnArray`, `BoolQuery`, `LoggerTimeSpan`, `CreateArgs`, `MdlPopoverComponent`, `TokenDetailsWithBalance`, `Function2`, `USSEntry`, `AWS.DynamoDB`, `ConfigLoader`, `Weight`, `MatchmakerRemove`, `TestSetup`, `UpdateFilterCommandInput`, `JoinNode`, `TeamsState`, `PolicySummary`, `ResourceKey`, `TaroNode`, `MDCMenuFoundation`, `BatchValidator`, `serialization.SerializableConstructor`, `MutationRecord`, `When`, `OpenBladeInfo`, `PromiseRequest`, `MetricDataPoint`, `SpaceType`, `PaletteRegistry`, `IAccountProperties`, `IntCV`, `LibraryNotificationAction`, `UI5XMLViewCompletion`, `OpenApiDocumentRefs`, `PageAPIs`, `Guard`, `ObjectStore`, `ResolveCallback`, `PropertyCategoryRendererManager`, `core.IThrottler`, `UITextView`, `IEmail`, `IFilterItemProps`, `BoxCollisionShape`, `ts.ImportDeclaration`, `HDNode`, `ImageVideo`, `PathEdge`, `UpdateOrganizationConfigurationCommandInput`, `ListCardContent`, `NZBAddOptions`, `LeafNode`, `ClarityType`, `PutResourcePolicyCommandInput`, `GroupedFields`, `Conditional`, `IStorageScheme.IStorage`, `ProjectGraph`, `AppEventsState`, `IGeometryAccessor`, `IMdcChipElement`, `AssociationGroupInfoCCInfoGet`, `ISets`, `GraphQLFormattedError`, `DocSourceFile`, `IZipEntry`, `ESLScreenBreakpoint`, `GraphicUpdateResult`, `FirebaseUser`, `ICharacteristic`, `DocumentError`, `HTMLFormElement`, `Block`, `AstEditorProps`, `MeterCCReset`, `ComponentStory`, `GroupItem`, `TaggingInfo`, `CoreSetup`, `UAProxyManager`, `server.Diagnostic`, `LiveEventMessagingService`, `KeyboardEventHandler`, `ParquetData`, `THREE.Object3D`, `TableColumnWidthInfo`, `IMusicDifficultyInfo`, `Applicative2C`, `IImageConstructor`, `CaptionSelector`, `AxisTick`, `Event`, `ClusterMetadata`, `CardBrand`, `FileTrackerItem`, `IGraphDef`, `GraphNode`, `ISessionContext`, `findUp.Options`, `ComponentResult`, `Discord.GuildMember`, `SnackbarState`, `DeclarationMapper`, `ChangesetFileProps`, `ComponentTag`, `PersonFacade`, `ListObjectsRequest`, `Tsoa.Method`, `TTypescript.ParsedCommandLine`, `PutPermissionPolicyCommandInput`, `FieldConfiguration`, `WolfState`, `TestResponse`, `PlayerPageSimulation3D`, `UAMethod`, `JsonSchemaDataType`, `Mock`, `ScmDomain`, `SignatureProviderRequestEnvelope`, `DrawContext`, `DecodeData`, `Electron.OpenDialogReturnValue`, `AssetChangedEvent`, `PostFilter`, `WebResourceLike`, `Electron.IpcMainEvent`, `tensorflow.ISignatureDef`, `VNodeTypes`, `FilterMetadata`, `React.MouseEventHandler`, `AmountOptions`, `MessengerTypes.Attachment`, `ContentData`, `NumericLiteral`, `PubSubEngine`, `Membership`, 
`UpdateCheckResult`, `CategoryData`, `TokenResponse`, `TableOperationColumn`, `social.ClientState`, `ReadableSpan`, `LibraryContextSeries`, `SupportedFormat`, `GlobalConfig`, `TagWithRelations`, `d.CssImportData`, `UseReceiveSet`, `DividerProps`, `TaskWithMeta`, `ColumnDefs`, `GameObj`, `FormatterFn`, `BlockMapType`, `EntityCollections`, `DaffAuthTokenFactory`, `MDCSliderAdapter`, `lsp.Location`, `PackagesConfig`, `TestFunctionImportEntityReturnTypeCollectionParameters`, `FirebaseSubmission`, `ValidationResult`, `ServerProvider`, `DescribeReservationCommandInput`, `SingleWritableState`, `TabStrip`, `LOG_LEVEL`, `Events.pointerdown`, `ResolvedAliasInfo`, `Tap`, `ZobjPiece`, `VariableDefinitionNode`, `StageInfo`, `requests.ListWindowsUpdatesInstalledOnManagedInstanceRequest`, `GetExtensions`, `BungieService`, `Ethereum.Network`, `TextStyle`, `cachedStore.Container`, `HighlighterProps`, `BinaryInfo`, `Miner`, `sodium.KeyPair`, `BufferedChangeset`, `poolpair.PoolSwapMetadata`, `MigrationBuilder`, `NotificationTime`, `WritableComputedRef`, `NonNullTypeNode`, `RegLogger`, `LocalStorageSources`, `BitcoinBalanceMonitorConfig`, `UpdateDatabaseCommandInput`, `PuppetBridge`, `ImportEditor`, `DragDropService`, `EditorActionsManager`, `TCity`, `Terrain`, `DataStartDependencies`, `ModuleBody`, `IInstance`, `ListExperimentTemplatesCommandInput`, `ITestWizardContext`, `AsyncValidatorFn`, `ChainGunLink`, `ComplexNestedErrorData`, `Drive`, `UNIST.Node`, `interfaces.ServiceIdentifier`, `books.Table`, `ValidationEngine`, `Ban`, `ts.ParameterDeclaration`, `AST.Module`, `CreateDomainRequest`, `NodeVisitor`, `ThemeColorState`, `ErrorData`, `ExadataInfrastructureContact`, `vscode.MessageItem`, `GetRotation`, `VendorType`, `Globals`, `CustomSpriteProps`, `IMatrixCell`, `IdentifierObject`, `FunctionFallback`, `ActivationLayer`, `PlayingState`, `DebugProtocol.StepOutArguments`, `AliasMap`, `ObservableLanguage`, `CSSRule`, `FieldErrors`, `ParsedRoute`, `PutResourcePolicyCommand`, `ContainerForTest`, `RequestResult`, `SelectDropdownOption`, `JsonResponse`, `V1Namespace`, `MethodMap`, `DynamoDB.DocumentClient`, `MlContextValue`, `GraphConfiguration`, `IUserSubscription`, `EthTxType`, `DaffCartPaymentFactory`, `IgnoreQueryParamsInResponseCommandInput`, `Shuriken`, `ScopedClusterClientMock`, `Highcharts.PolarSeries`, `IRNG`, `Atomico`, `Kafka`, `PlansState`, `LoginResult`, `ViewCell`, `VerticalAlignments`, `IRuleCheck`, `TextDocumentChangeEvent`, `DefaultInputState`, `LedgerWalletProvider`, `ARCamera`, `PDFField`, `ActionSheetProps`, `MangleOptions`, `WebdriverIO.Browser`, `PureIdentifierContext`, `IGroupItem`, `OrganizationInterface`, `VoiceServerUpdate`, `SettingsService`, `QueryHints`, `DefaultState`, `MemFS`, `ResolvedEntitySchema`, `IOObjectSet`, `Research`, `Semigroup`, `MacroInfo`, `UnderscoreEscapedMap`, `MeshVertex`, `okhttp3.WebSocket`, `LoadEvents`, `SignedToken`, `MockWebSocket`, `IAnimationState`, `UpdateSecurityProfileCommandInput`, `ConfigurationOptions`, `ListSecretVersionsRequest`, `Ppu`, `UpdateContactCommandInput`, `VarAD`, `AppABIEncodings`, `CloseChannelParams`, `TranslatePipe`, `CdkDropList`, `q.Tree`, `GraphQLSubscriptionConfig`, `IOContext`, `UseSubscription`, `SNode`, `Observation`, `ParamConfig`, `types.IAzureQuickPickOptions`, `MyNFT`, `CompletedGatewayOptions`, `ContactsProvider`, `functions.storage.ObjectMetadata`, `DateOrString`, `AbstractMethod`, `Response`, `ComponentCommentIterator`, `DocumentRegistry.IContext`, `StartPlugin`, `River`, `FoamTags`, `TextType`, `EElementSignature`, 
`IAnimationOptions`, `Heading`, `JobNumbered`, `IMQRPCRequest`, `ChangeTheme`, `UIFill`, `MagentoOrder`, `EditableContent`, `MetaValue`, `clientSocket`, `MatchedStory`, `FileSystemEntryKind`, `Scroll`, `IKeyboardInput`, `PreprocessorGroup`, `NewableFunction`, `IGherkinLine`, `UpSetQueries`, `IElementColors`, `MdcChipSet`, `Unsub`, `JobRunLifecycleState`, `CBCharacteristic`, `IQueryProps`, `DefaultTextStyle`, `CancelTokenStatic`, `ExpShapeConcat`, `ErrorObject`, `ng.IHttpService`, `VariableDefinition`, `SelectDownshiftRenderProps`, `RelationshipProps`, `ContentState`, `ScannedProperty`, `ListTableRowsCommandInput`, `MessageCollector`, `FilterEntity`, `RendererNode`, `CrudRequest`, `ResolverData`, `ViewContainerTitleOptions`, `CommentItem`, `ContainerSample`, `StreamMetricReport`, `TransactionModel`, `ISiteScriptActionUIWrapper`, `OpaqueToken`, `CertificateSummaryBuilder`, `Files`, `PlanInfo`, `TabularDataset`, `nanoid`, `CallbackObject`, `TraitLocation`, `Element`, `Apps`, `CalculatedTreeNode`, `AccountService`, `PluginStrategy`, `SavedObjectConfig`, `JsonPointerTokens`, `MailTo`, `MarkerInfo`, `BaseReasonConfig`, `FMOscillator`, `GDQOmnibarBidwarOptionElement`, `OffsetIndexItem`, `DragDropConfig`, `NumericRange`, `Sanitizer`, `EventStream`, `NzTreeBaseService`, `API.services.IXulService`, `AcceptTokenRequest`, `Currency`, `requests.ListVolumeBackupPoliciesRequest`, `GradientBlock`, `Splice`, `HTMLIonRouterElement`, `SetStateFunc`, `ContentFolder`, `BinaryDownloadConfiguration`, `IStatRow`, `Critter`, `ApplicationVersion`, `CLM.CLChannelData`, `FilenameFilter`, `ChangeBundle`, `FlagshipTypes.AndroidConfig`, `TextColor`, `StringExpression`, `AxiosResponseGraphQL`, `ExcludedConditions`, `ListMatchesRequest`, `iField`, `ODataSegment`, `PSIReal`, `BoardView`, `ThreadItem`, `ArDrive`, `EventFnBefore`, `firestore.DocumentReference`, `CTR`, `XmlParser`, `FabAction`, `ToneMapping`, `MkFuncHook`, `Memo`, `tf.io.ModelJSON`, `StringKeyValuePair`, `UpSetQuery`, `ActionName`, `GeneralConfig`, `RecurringDepositsService`, `requests.ListServicesRequest`, `GradientBlockColorStep`, `IExecutorHandlersCollection`, `ChildAppRequestConfig`, `ITreeDataNode`, `CreatePolicyCommandInput`, `NormalizedFile`, `TabsConfig`, `NativeComputationManager`, `B9`, `CompanionData`, `$IntentionalAny`, `BlockTransactionString`, `files.FullLink`, `TagData`, `FieldFormatInstanceType`, `ISdkStreamDescriptor`, `OnSuccessCallback`, `ZWaveNode`, `AxeResults`, `StructureController`, `IServiceConfiguration`, `LinearGradientPoint`, `FileContent`, `RecordsRefs`, `uElement`, `IteratorResult`, `DocumentsExtImpl`, `BigQueryRetrieval`, `Weighter`, `RenderButtonsArgs`, `SettingService`, `Expense`, `TreeEnvironmentContextProps`, `VirtualNetwork`, `decimal.Decimal`, `PartyRemove`, `ReleaseType`, `RecordSource`, `OnCacheChangeEventFn`, `DisplayLabelsRequestOptions`, `OmvFeatureFilterDescriptionBuilder.MultiFeatureOption`, `IModifierRange`, `Fallback`, `ServerState`, `requests.ListApmDomainWorkRequestsRequest`, `CommandExecutorInterface`, `Relative`, `CharRenderInfo`, `IMergeTreeDeltaCallbackArgs`, `EventSearchRequest`, `JsonParserGlobalContext`, `Mocha.SuiteFunction`, `DescribeStreamCommandInput`, `PageIndex`, `Hit`, `CellItem`, `VariantCfg`, `F1`, `CellEventArgs`, `DeleteIntegrationResponseCommandInput`, `CompilerEventBuildFinish`, `a.Module`, `ThyGuiderConfig`, `IAuthenticationManager`, `StatusIcon`, `OutputChunk`, `LoggerWithTarget`, `BehaviorSubject`, `T9groups`, `ModalDialogOptions`, `TreeMapNode`, `ImageModel`, `MovieState`, 
`RTCRtpEncodingParameters`, `L2`, `RegClass`, `JSXElementConstructor`, `FuncType`, `CustomSettings`, `HttpResponseCodeCommandInput`, `PerpMarket`, `CreateConfigurationSetCommandInput`, `ListModel`, `CacheState`, `NavItem`, `LedgerService`, `Animated.Value`, `LegendStateProps`, `ResponseType`, `ParsedGeneratorOptions`, `PresentationPropertyDataProvider`, `GCM`, `DescribeDBParameterGroupsCommandInput`, `FileDataMap`, `UserPreKeyBundleMap`, `ElementHarness`, `Flashcard`, `IPodcast`, `UpdateUser`, `ReferenceExpression`, `Computer`, `Signer`, `ExecOptions`, `GroupWithChildren`, `UpdateJobRequest`, `DecodedInformation`, `VMLRenderer`, `SettingNames`, `ESLMedia`, `BaseRange`, `IForm`, `ValueNode`, `ts.UnionTypeNode`, `PriceRangeImpl`, `ObjectSetting`, `Warehouse`, `OutputTargetAngular`, `RequestArugments`, `CF.Get`, `TestableApiController`, `OAuthRedirectConfiguration`, `SafetyDepositDraft`, `ContactResponse`, `IAudioMetadata`, `LoadResult`, `TreeData`, `ResizeObserverEntry`, `InvokeDecl`, `ng.ITimeoutService`, `ParameterizedValue`, `ValidationQueue`, `DatetimeParts`, `LinkReduxLRSType`, `RemoteConfig`, `EdgeRouting`, `IndentNode`, `IToastProps`, `OutputTargetHydrate`, `HttpBackend`, `Mixer`, `RenderBatch`, `ts.CompilerOptions`, `CdtSite`, `CoreTypes.LengthType`, `EnhancedTransaction`, `HighRollerAppState`, `Todo_viewer`, `RoleModel`, `Simulator`, `BuildEvents`, `SpecialBreakTypes`, `TextTrack`, `Face3`, `DMMF.InputType`, `MIRPCode`, `GifFrame`, `CalendarDate`, `App.services.IWindowService`, `TagItem`, `Positioner`, `UserFields`, `CLIEngine`, `RetryStrategyOptions`, `EveesConfig`, `TodoModel`, `Gesture`, `ResOrMessage`, `ResourceCount`, `TextureInfo`, `MediaType`, `PropertyFactory`, `ClippedVertex`, `LiteralCompiler`, `requests.ListBudgetsRequest`, `RadixAtom`, `Tuplet`, `DeserializeEvent`, `AnimatedInterpolation`, `ContentObject`, `ListsPluginRouter`, `CommonStyle`, `CSSShadow`, `forge.pki.Certificate`, `IPropertyTemplate`, `IW`, `IStarterDependency`, `FilterInput`, `K`, `Initial`, `SkeletonProps`, `Rule.RuleMetaData`, `B64EncryptionResult`, `DeclaredElement`, `SensorElement`, `PartialParam`, `Pulse`, `UseRefetchState`, `Ops`, `ThyTreeNodeData`, `MatchData`, `TodoItemFlatNode`, `IProjectItem`, `WalletRecord`, `HTMLIonOverlayElement`, `JSONTopLevel`, `MigrationTypes`, `ISortCriteria`, `ChanLayer`, `TreeNodeValue`, `GraphTxnOutput`, `TaskManagerSetupContract`, `NodePosition`, `INamedDefinition`, `ServiceProvider`, `LobbyController`, `ReadonlyTuple`, `PaginationServiceStub`, `IGridColumnFilter`, `CopySnapshotCommandInput`, `AnimeNode`, `APIGateway`, `MergeTreeChunkV1`, `LroImpl`, `d.OutputTargetDocsReadme`, `SubmittableExtrinsic`, `DismissedError`, `GraphNodeID`, `MultisigItem`, `ClientSessionEntryIded`, `ObservableConverter`, `DeleteChannelModeratorCommandInput`, `RebaseResult`, `RootScreenDelegate`, `peerconnection.PeerConnection`, `MySQL`, `ImmutableAssignment`, `Connector`, `AppEpic`, `ObjectIdentifier`, `CustomConfigurationProvider`, `OutgoingStateType`, `MetricsModel`, `AuditedAttributes`, `PaperAuthor`, `TournamentList`, `ValidationRules`, `LiveSelector`, `SmallMultiplesGroupBy`, `BlobCreateSnapshotResponse`, `TestFunctionImportEntityReturnTypeParameters`, `NodeObject`, `ConsumeMessage`, `AuthRouteHandlerOptions`, `RadioOption`, `IPluginsContext`, `PrimitiveBundle`, `FoundationElementDefinition`, `IEventData`, `VisEditorOptionsProps`, `StructureRampart`, `ServerIdentity`, `IconifyElement`, `SequencePatternInfo`, `GitContributor`, `UsePreparedQueryOptions`, `settings.Settings`, `RatingPair`, 
`FieldTemplateProps`, `FieldArrayRenderProps`, `mod.LoadBalancerTarget`, `PopupInfo`, `LogType`, `DocumentSymbolParams`, `Metadata_Item`, `STColumnFilter`, `KVNamespace`, `DataRecognizer`, `FullConfig`, `TypedDataDomain`, `EntryList`, `ITokenService`, `EthereumCommon`, `LineSegments`, `BaseConverter`, `ToastItem`, `NameObjExecuteInfo`, `ESTree.MemberExpression`, `DomainSummary`, `DeleteGlobalClusterCommandInput`, `ProgressBarProps`, `CacheNotifyResult`, `GridLineStyle`, `InternalInstanceState`, `L2Item`, `ReadModelReducerState`, `IContent`, `AccountRepository`, `d.EmulateConfig`, `ExecutionScopeNode`, `IQueryParameter`, `IFieldOption`, `MenuController`, `Wine`, `Bindable`, `PhysicalObject`, `RawOptions`, `ColorPresentationParams`, `IEnvironmentRead`, `Identity`, `PolicyStatement`, `scriptfiles.ASScope`, `TimePeriod`, `SortConfig`, `MemoryStorage`, `AutocompleteProps`, `BuildTarget`, `Heightfield`, `UserMetadataStore`, `Bals`, `ParserInput`, `DaffCategoryFilterToggleRequestEqualFactory`, `YamlMap`, `SchemaEnv`, `RefactorContext`, `DetectionResult`, `IClusterClient`, `TypeAttributes`, `BaseDbFieldParams`, `Angle`, `TypeParameterReflection`, `MooaApp`, `fabric.IObjectOptions`, `ComponentFactory`, `ReplayTabState`, `RE`, `CustomLink`, `FieldArgs`, `PermissionContext`, `ColumnComponent`, `WindowRef`, `UnitsMap`, `StepOption`, `Renderer3`, `TypeApp`, `LogAnalyticsMetric`, `ICollectParms`, `OnboardingOpType`, `IntervalScheduler`, `AdvancedDynamicTexture`, `DataLoader`, `MonitorCollection`, `SafeElementForMouse`, `ResizeInfo`, `BiKleisli`, `Plugins`, `IAsset`, `firestore.QueryDocumentSnapshot`, `AudioVideoController`, `NavNode`, `ModalOptions`, `RuleContext`, `InsightType`, `DescribeEventsRequest`, `CommandExecutor`, `AccountID`, `CreateCertificateResponse`, `CreateClusterRequest`, `CellInput`, `ObservableHash`, `AsyncProcessingQueue`, `WStatement`, `GSMemoryMap`, `RSS3Index`, `SplitCalculator`, `ExchangePriceRepository`, `IPublicKey`, `Fn2`, `ManifestLoader`, `ILinkWithPos`, `ParsedFile`, `SiteStatus`, `PopoverStateReturn`, `listOptions`, `TraverseFunction`, `PageBlockRule`, `ContractDeployer`, `ToasterService`, `ethers.ethers.EventFilter`, `TypeContext`, `InMemoryFileSystemHost`, `FragLoaderContext`, `IEntry`, `DropletInfo`, `FactEnvelope`, `DotenvConfigOutput`, `SparseMerkleTreeImpl`, `Table2`, `ThreadData`, `IExecuteCommandCallback`, `AxisCoordinateObject`, `GetWorkRequestResponse`, `LayoutFacade`, `InterceptorManager`, `response`, `TokenFactory`, `ERC20Value`, `TrackGroup`, `TypeTemplates`, `TaskPool`, `CliCommandExecution`, `FunctionExpression`, `CkElement`, `IAtomHeader`, `SerializedConsoleImpl`, `FunctionDocumentation`, `EventType`, `capnp.List`, `ErrorBoundaryState`, `DummyTokenContract`, `MemoString`, `IPlan`, `TDataProvider`, `LocalProxiedEntry`, `ModuleMap`, `IStorages`, `AlertId`, `Types.RequestParameters`, `PerSignalDetails`, `TestRun`, `PassImpl`, `HubConnection`, `IBucket`, `DummyResolver`, `ISceneConfiguration`, `MVideoAccountLight`, `DashboardContainerInput`, `RematchStore`, `Align`, `P2PResponsePacketBufferData`, `ObservableFromObject`, `AddSchema`, `CRC16CCCommandEncapsulation`, `TransientGraphUtility`, `Experiment`, `RenderProps`, `QueryPrecedenceCommandInput`, `DebugProtocol.Response`, `GlobalsService`, `ShoppingCartService`, `MapboxGL.Map`, `ICandidateInterviewersCreateInput`, `StatusType`, `Preimage`, `CommunicationIdentifier`, `Fixtures`, `HTMLIonAccordionElement`, `CoreType`, `RequestHandlerParams`, `Span_Event`, `ButtonSize`, `UriLocation`, `SegmentHandler`, 
`AbbreviationMap`, `IDataPoint`, `RawProcess`, `ObjectPredicate`, `SankeyNode`, `SkipOptions`, `IocContext`, `requests.ListAccessRulesRequest`, `requests.ListAddressListsRequest`, `TooltipPosition`, `ExchangePriceRecord`, `TasksPluginReminderModel`, `FundedAwardModel`, `ResponsiveState`, `LmdbDbi`, `ConfigurationCCSet`, `ts.WhileStatement`, `GetRoomCommandInput`, `Orders`, `ArticleService`, `NodesRef`, `Pokemon`, `SpatialImageEnt`, `CustomersState`, `ZipIterator`, `ParameterInjectInfoType`, `OutlineManualServerEntry`, `React.ErrorInfo`, `ValueSuggestionsGetFnArgs`, `InstallerMachineContext`, `NotNeededPackage`, `RedBlackTree`, `OgmaPrintOptions`, `MemberInfo`, `SetupParams`, `ValueOrPromise`, `IPointPosition`, `PuppetRoomJoinEvent`, `VNodeWithAttachData`, `TooltipOperatorOptions`, `MarkdownView`, `WebStandardsDashboard`, `Collection`, `UserDataStore`, `EnvironmentVariables`, `PropertyFilter`, `CmbData`, `ConsoleSidebarLink`, `StatsCompilation`, `TransactionData`, `ManagedHsm`, `IGceDisk`, `WorkspaceDefinition`, `VisibilityFilter`, `TransactionEndedPayload`, `TAccessor`, `Material_t`, `QueryableFieldSummary`, `Enzyme.ShallowWrapper`, `CameraControllerClass`, `ZoneAndLayer`, `IAuthentication`, `R`, `WithGenerics`, `DynamoDbWrapper`, `MUST_CALL_AND_RETURN_SUPER_METHOD`, `UserVariableContext`, `ISharedDirectory`, `ServerDataImportStore`, `AsyncIterableIterator`, `TransientStore`, `RefreshTokenEntity`, `ContributorService`, `IChoice`, `SinonSandbox`, `ButtonColors`, `$RequestExtend`, `ClientInfo`, `AwsState`, `FunctionCallValue`, `GoToLabelProps`, `DeleteWriteOpResultObject`, `AssignmentNode`, `D2`, `Attribute.JSON`, `DxModelContstructor`, `RawDraftContentBlock`, `ContextMenuAccess`, `Paginate`, `PathMatcher`, `_ITable`, `p5ex.ShapeColor`, `ValueEdge`, `ApiTypes.UploadLinkRequest`, `language`, `IGetOptions`, `Plugin.Shared.Definition`, `LogicalExpression`, `ParserSourceFileContext`, `MyContext`, `QueryArg`, `StreamPipeOptions`, `JestEnvironmentGlobal`, `SyncSubject`, `NameSpaceInterfaceImport.Interface`, `ODataOptions`, `MathToSVGConfig`, `IHubContent`, `express.Application`, `NgxSmartLoaderService`, `TableRow`, `LoadingEvents`, `AutocompleteSelectCellEditor`, `Accessibility.SeriesComposition`, `UserRegister`, `ComponentStatus`, `IAstBuilder`, `TwilioServerlessApiClient`, `BitcoinjsKeyPair`, `ItemController`, `DaffCategoryFilterEqualOption`, `VerifyErrors`, `OutputDataSource`, `ModbusTransaction`, `NodeAndType`, `KanjiNode`, `TypedObject`, `Concept`, `CarSpec`, `DidChangeTextDocumentParams`, `SensorObject`, `IExecutionQueue`, `IOrganizationCreateInput`, `LogProperties`, `DirectoryResult`, `Pouchy`, `ContainerModel`, `requests.ListQuotasRequest`, `UseLazyQuery`, `CanvasTexture`, `PointerEventInit`, `UserAdministrative`, `ClarityAbiFunction`, `ICreateSessionOptions`, `QueryStart`, `FormService`, `ValidationFn`, `MetaTransaction`, `Parameterizer`, `Oracle`, `ColorDirection`, `ISelectedEmployee`, `BudgetItem`, `Physics2DServer`, `MatSlideToggleChange`, `SendTransactionOptions`, `IDialogConfiguration`, `UserAuth`, `ConcreteTaskInstance`, `FlatNode`, `ScrollEventData`, `SimpleToast`, `HeaderMapTypeValues`, `Throttler`, `SetInstallPrompt`, `ActivityItem`, `XI18nProperty`, `VertexLabels`, `CanaryExecutionResult`, `SelectionScopeRequestOptions`, `Required`, `VirtualNetworkGateway`, `WaterfallStepContext`, `T.Component`, `NumberRange`, `Deno.Addr`, `AnnotationService`, `EqlCreateSchema`, `StyledElementLike`, `BitcoinStoredTransaction`, `monaco.languages.IState`, `dia.Element`, `RoutesService`, `TKind`, 
`BigIntInstance`, `InfluxDB`, `CustomWorld`, `ErrorTypes`, `HistoryItemImpl`, `MlJobWithTimeRange`, `workspaces.WorkspaceDefinition`, `CollectDeclarations`, `ArXivStorage`, `k8s.types.input.core.v1.PodTemplateSpec`, `SettingsRepository`, `SessionStateControllerTransitionResult`, `BlockStatement`, `GetSampledRequestsCommandInput`, `Auto`, `InitializeSwapInstruction`, `HalfEdge`, `TwingTest`, `MockEvent`, `ConfigState`, `CheckSavedObjectsPrivileges`, `Guid`, `Spinner`, `RootSpan`, `TableAliasContext`, `requests.ListAvailableWindowsUpdatesForManagedInstanceRequest`, `PersistedStateKey`, `StructureNode`, `ListNotificationsRequest`, `EvaluateCloudFormationTemplate`, `UpdateProjectCommandInput`, `DemographicsGroup`, `ComboBox`, `requests.ListScheduledJobsRequest`, `CommandInterface`, `UpdateWriteOpResult`, `By`, `FilePickTriggerProps`, `WalkResult`, `Orderbook`, `AbstractCamera`, `CheckRun`, `t.Expression`, `FocusEvent`, `ThemeState`, `IBoxProps`, `RenderView`, `DeployUtil.ExecutableDeployItem`, `DataViewComponentState`, `StoreCreator`, `MergeTree.TextSegment`, `IAlert`, `SettingEntity`, `MyClass`, `BIterator`, `ITimerToggleInput`, `Newsroom`, `EncryptedShipCredentials`, `XTransferSource`, `DataId`, `CalendarDateInfo`, `DaffCartItem`, `LocalPackageInfo`, `MVideoFullLight`, `SparseVec`, `WebGLRenderingContext`, `UserAnnotationSet`, `InternalCoreUsageDataSetup`, `MappingsEditorTestBed`, `CmsGroup`, `Launcher`, `MyEThree`, `listenTypes`, `ICoverageCollection`, `ListJobsByPipelineCommandInput`, `AndroidActivityBackPressedEventData`, `ThemeService`, `ActivityFeedEvent`, `ControllableEllipse`, `BMapGL.Point`, `DescribeConnectionsCommandInput`, `TitleProps`, `AfterCombatHouseCardAbilitiesGameState`, `ContractData`, `ChangeMap`, `IBlockchainProperties`, `AuthorizeOptions`, `FunctionContext`, `EnvelopeListener`, `Events.kill`, `ora.Ora`, `RepositoryStateCache`, `Justify`, `ShapeInstanceState`, `QuickInputButton`, `DeleteRepositoryResponse`, `WFWorkflow`, `CallOptions`, `MaybeType`, `OptionsWithMeta`, `QueryDefinition`, `DatabaseTransactionConnection`, `CollectionTransaction`, `FirstConsumedChar`, `IndexingConfig`, `FieldFilter`, `WyvernSchemaName`, `DinoContainer`, `FakeCard`, `UpdateAccountRequest`, `AstVisitor`, `UpdateDeploymentCommandInput`, `ClassSymbol`, `files.FullFilePath`, `StylableResolver`, `JsonDocsMethod`, `Draft`, `ColorString`, `CoreSavedObjectsRouteHandlerContext`, `LexDocument`, `PartyLeader`, `TabularLoaderOptions`, `LegendItem`, `DataTable.CellType`, `ir.Stmt`, `IParameterTypeDefinition`, `GfxTexture`, `TextEditChange`, `ICloudFoundryCreateServerGroupCommand`, `IntersectParams`, `Views.View`, `INgWidgetPosition`, `IUpworkDateRange`, `IDeviceInterface`, `GetFieldsOptions`, `ChatModule.chatRoom.ChatPubSubMessage`, `ConventionalCommits`, `GameStateModel`, `BuildFailure`, `CreateIndexNode`, `IDescriptor`, `FieldParser`, `CompilerIR`, `EventDispatcherEntry`, `RemoveSourceIdentifierFromSubscriptionCommandInput`, `IChange`, `IUpdateOrganizationCommandArgs`, `VirtualElement`, `LoaderService`, `ast.AbstractElement`, `SpriteSheet`, `RequestApprovalEmployee`, `GraphQLQueryBuilder`, `DialogContextValue`, `CachedProviders`, `EventFilter`, `ESLAnimateConfigInner`, `TokenizerOutput`, `AccessToken`, `MyUnion`, `RtcpHeader`, `EvaluationFunction`, `Comparer`, `dom.Node`, `React.Context`, `FlowsenseCategorizedToken`, `IndyLedgerService`, `MediatorMapping`, `DSpaceObjectDataService`, `DatabaseFacade`, `ExternalAuthenticateResultModel`, `ListApplicationsRequest`, `Monoid`, `TikTokScraper`, 
`RouteLocationNormalized`, `SpriteEffect`, `InstalledPlugin`, `ThySlideService`, `StorageTransformPlugin`, `SwitchIdInfo`, `IHash`, `ColumnExtension`, `OriginConnectionPosition`, `StatefulSearchBarDeps`, `Cancellation`, `CommandDescriptor`, `GameVersion`, `AddAsTypeOnly`, `XMLBuilderState`, `IOrganizationVendor`, `XPath`, `JitsiLocalTrack`, `TestFunctionImportComplexReturnTypeCollectionParameters`, `CohortCreationState`, `RelativeTimeFormatOptions`, `TokenBurnV1`, `ServiceStatusLevel`, `ChapterStatus`, `QueryCacheEntry`, `J3DLoadFlags`, `CreateTRPCClientOptions`, `GridCellValue`, `IMyChangeRequestItem`, `IServerResponse`, `OrSchema`, `DecltypeContext`, `IObjectDefinition`, `ViewPortService`, `MAL`, `TSPass`, `NzDestroyService`, `DeploymentTargetConfig`, `coord`, `HostCreatedInstance`, `ITestState`, `ResourceRequirement`, `TReference`, `ParserPlugin`, `ResEntry`, `IssuesListCommentsResponseItem`, `FixedTermLoan`, `LiftedStore`, `RuleExpr`, `AnimationKeyframe`, `WgConfigFile`, `WalletKeys`, `nodenCLContext`, `FilterProps`, `ts.ScriptTarget`, `Subscribable`, `IAuthContext`, `NodeWithPos`, `SanitizedProtonApiError`, `IVSCodeWebviewAPI`, `OrderImpl`, `INumberFilter`, `BeancountFileService`, `ScaleString`, `DayHeaderWrapper`, `ListPortalsCommandInput`, `YAMLMap`, `TransactionConfig`, `IDetachable`, `TsConfigSourceFile`, `IRedisOptions`, `UI5SemanticModel`, `PlanGraph.Entities.GraphData`, `CustomTag`, `MaxVersions`, `BeButtonEvent`, `PrivateKey`, `ShowNewVisModalParams`, `SelectorFn`, `NodeEvaluateResult`, `RequesterMap`, `FsUri`, `NameIdentifier`, `Spatial`, `tf.io.TrainingConfig`, `IConnectionInfo`, `Font`, `XYZSizeModeValues`, `MessagingPayload`, `AvailabilityZone`, `XMessageService`, `AsyncSettings`, `CreateConfigurationSetEventDestinationCommandInput`, `HTMLProgressElement`, `ImmutableMap`, `DAL.DEVICE_NULL_DEREFERENCE`, `ConnectionPositionPair`, `Associativity`, `ParameterDetails`, `PaymentOpts`, `Any`, `requests.ListBootVolumeAttachmentsRequest`, `PathMatch`, `SearchInWorkspaceWidget`, `Idea`, `WrappingCode`, `message`, `ClassMemberLookup`, `FsReaddirItem`, `GuildChannel`, `AngularFirestoreCollection`, `ITelemetryLoggerPropertyBags`, `SubtleCrypto`, `ExpressionContainer`, `CreateStateHelperFn`, `HookHandler`, `DefinitionYAMLExistence`, `IEntityGenerator`, `Incoming`, `parser.Node`, `Xform`, `CircularQueue`, `OpenCommand`, `Dree`, `PostgresInfo`, `LabelProvider`, `MssEncryption`, `ProgramOptions`, `TransformNode`, `NavigationGraphicsItem`, `SinglePointerEvent`, `IFabricEnvironmentConnection`, `IRoomData`, `requests.ListComputeGlobalImageCapabilitySchemasRequest`, `TensorContainer`, `TargetDetectorRecipeDetectorRuleSummary`, `d.RobotsTxtResults`, `ContextFlags`, `IModelContentChangedEvent`, `LocalizeFunc`, `NewDeviceDTO`, `ITelemetryBaseEvent`, `PseudoClassSelector`, `AtomGetter`, `SerializedAnalysis`, `FovCalculation`, `QuestaoModel`, `FractionalOffset`, `Uint64`, `RepoSideEffectPendingExpectation`, `EnumNodeAndEdgeStatus`, `Type_AnyPointer_ImplicitMethodParameter`, `NamedProblemMatcher`, `ExtendedFloatingActionButton`, `RadixAID`, `IFormItemProps`, `IJetApp`, `CreateAudioArgs`, `IUploadedFile`, `requests.ListTaggingWorkRequestsRequest`, `active`, `MockDeploy`, `ArgMap`, `BlockNumber`, `BottomNavigationBar`, `DevServerService`, `ColumnPropertyInternal`, `WebviewPanelImpl`, `ScopeTransform`, `ProductsService`, `CatchClause`, `TracingBase`, `Dimension`, `RTCSctpTransport`, `SSHExecCommandResponse`, `ChatMessageReceivedEvent`, `IMonthAggregatedEmployeeStatisticsFindInput`, 
`Phaser.Types.Core.GameConfig`, `Mathfield`, `DataQueryRequest`, `TextEditor`, `ConvLayerArgs`, `ProductReview`, `SpringChain`, `StandardAuthorization`, `CascaderOption`, `MethodDeclarationStructure`, `VisualizationListItem`, `SVGVNode`, `DrilldownState`, `ReplacePanelActionContext`, `HubConfigInterface`, `BindingType`, `OrderBook`, `DoorLockLoggingCCRecordReport`, `DetectedCronJob`, `ContractOperationCallback`, `PrimitiveTypeDescription`, `ChainablePromiseElement`, `NamedStyles`, `Property`, `SchemaFunctionProperty`, `MSGraphClient`, `EventPartialState`, `LoginSession`, `IGatewayMember`, `UnitSystemKey`, `MonadIO`, `TSubscribeHandler`, `SendProp`, `IChangeEvent`, `TypeDBTransaction.Extended`, `RepositoryKind`, `InternalBulkResolveParams`, `SFCBuildProps`, `UICollectionDelegateImpl`, `X12Element`, `Outcome`, `OverviewTestBed`, `TopologyData`, `AwsTaskWorkerPool`, `StorageObjects`, `FindEslintConfigParams`, `RefactorBuffer`, `ImportsMetadata`, `TransformationMatrix`, `ProcessStageEnum`, `RemoveTagsFromResourceCommandInput`, `AliasOrConnection`, `StorageService`, `BlogService`, `Rights`, `Timespan`, `ResolverOptions`, `SavedObjectOptionalMigrationFn`, `ITagSet`, `ITypeMatcher`, `EqState`, `MessageBuffer`, `UpdateManager`, `Fn1`, `Part`, `SpecHelperConnection`, `IWarrior`, `LaunchRequestArguments`, `Html2OrgOptions`, `EdiDocumentConfigurationResult`, `CLLocationCoordinate2D`, `FootnotesItem`, `AddressDetails`, `requests.ListAutonomousDatabaseClonesRequest`, `LogFunctions`, `GqlExecutionContext`, `DashboardComponent`, `OutboundPackage`, `VFSRef`, `t.File`, `LocKind`, `DrawerNavigationState`, `SharedMatrix`, `UpdateResponseData`, `ContactList`, `ModifyLoadBalancerAttributesCommandInput`, `BezierCurve3dH`, `WaterPoint`, `ICategoricalColorMappingFunction`, `SavedObjectType`, `TSESTreeToTSNode`, `CieloTransactionInterface`, `RtcpPacket`, `ModuleJob`, `IApprovalPolicyCreateInput`, `SinonFakeTimers`, `BotState`, `IQuizFull`, `Yendor.Tick`, `CollectorFetchContext`, `fromSingleRepositoryStatisticsActions.GetRepositoryStatistics`, `TickSource`, `androidx.transition.Transition`, `OuterExpressionKinds`, `TsProject`, `HeatmapConfig`, `Curried`, `GitHubIssue`, `QUICError`, `TypeResult`, `SolutionDetails`, `MeetingState`, `ISignal`, `ControlsProps`, `CustomRegion`, `TextWidthCache`, `SelectRangeActionContext`, `CachedMapEntry`, `PreparedFn`, `ApplicationTemplateAPIAction`, `BarDataSet`, `UtilProvider`, `HighlightInfo`, `SocketIoChannel`, `INodeType`, `EndpointInfo`, `PopulatedTransaction`, `RecentData`, `IPartitionLambdaFactory`, `SpanAttributes`, `GroupOrientation`, `JobResultDTO`, `Facade`, `IntNode`, `Range`, `ComputedScales`, `InputLayer`, `GeolocationService`, `FeedbackState`, `AjaxAppender`, `Events.start`, `CreateTypeStubFileAction`, `ConsoleLogger`, `UserAccountID`, `ConfigurationProperty`, `DataAccess`, `GetterHandler`, `RenderDeps`, `IBuildApi`, `AuthToolkit`, `WorkspaceConfig`, `ItemIndex`, `FeatureValue`, `KbnFieldType`, `AtomizeNecessary`, `MatchersUtil`, `CredentialPreview`, `ExpShapeConst`, `MlCapabilities`, `HTMLHtmlElement`, `ApplyHandler`, `Replica`, `Espree`, `IpcMainEvent`, `IResponseMessageHandler`, `MessageItem`, `EthereumClient`, `ParameterGroup`, `MatTabChangeEvent`, `GeoCoordinatesLike`, `PickerProps`, `PDFAcroCheckBox`, `ArianeeWallet`, `ElectronRoleCommand`, `SuggestionsComponentProps`, `ChatAdapterState`, `Monitoring`, `InstancePool`, `DeleteScheduledActionCommandInput`, `Overloads`, `ComponentWithMeta`, `SignInOutput`, `RawAlertInstance`, `ConsoleMessage`, `PSIChar`, 
`RulesTestEnvironment`, `VisualizeFieldContext`, `ts.SetAccessorDeclaration`, `NucleusVersion`, `InheritedCssProperty`, `StreamState`, `TableViewModel`, `HTMLCanvas`, `IStepAdjustment`, `TimeoutError`, `ISampleDescription`, `ts.SourceFile`, `OrbitDefinition`, `PostgresAdapter`, `ScrollOptions`, `FieldHierarchy`, `chromeWindow`, `Dynamic`, `DbPatch`, `StatelessOscillator`, `EmitHelper`, `DaffAuthToken`, `ts.PropertyAccessExpression`, `q.Message`, `FailureDetails`, `SavedSearch`, `CollapsedFormatField`, `ResolvedSchema`, `CampaignTimelineChanelPlayersModel`, `DragObjectWithType`, `InternalSymbol`, `JSDocUnionType`, `PlainValue`, `SUUID`, `InterfaceWithCallSignature`, `ObservablePoint3D`, `ExperimentStateType`, `OpenYoloProxyLoginResponse`, `RawError`, `SwaggerConfig`, `BanList`, `NzCascaderOption`, `IStoreState`, `FormConfig`, `ICkbMint`, `ExchangeAccount`, `IMDBVertex`, `AceConfigInterface`, `XmlFile`, `MergeTree.PropertySet`, `GunGetOpts`, `Value`, `MakeErrorMessage`, `RadixAddress`, `InlineComposition`, `MediaDeviceInfo`, `IUploadAttributes`, `I80F48`, `vscode.ConfigurationTarget`, `VerifyEmailAccountsRequestMessage`, `GPattern`, `LeafonlyBinaryTree`, `HarperDBRecord`, `ViewBase`, `RemovableAtom`, `InsertUpdate`, `GitHubRepository`, `IAckedSummary`, `CacheStore`, `DynamicArgument`, `LocalActorSystemRef`, `PreviousSpeakersActions`, `PhysicsBody`, `InjectedQuery`, `TodoTxtTask`, `IReserveUpdateValues`, `Cache`, `MutableVector3`, `OceanSpherePoint`, `DefaultViewer`, `ConnectionAction`, `SpeechConnectionMessage`, `ISurveyCreatorOptions`, `UnitConversion`, `MiddlewareContext`, `CodeGenDirective`, `Meter`, `PluginViewWidget`, `ethereum.UnsignedTransaction`, `ResourceHolder`, `fieldType`, `VariableDefinitions`, `GetOrganizationParams`, `ChangedData`, `TOperand`, `DbDrop`, `ModulesContainer`, `StartServicesGetter`, `interfaces.CommitType`, `IImageryMapPosition`, `BuiltRule`, `LPStat`, `CommentInfo`, `LookupExpr`, `StylableMeta`, `Connection`, `DMMF.OutputType`, `CompilationError`, `Cards`, `Package`, `MockERC20Instance`, `TokenTypes`, `ReducerState`, `HandshakePacket`, `IEffectExclusions`, `DeclarationCache`, `ASN`, `HeroCollection`, `SearchInWorkspaceFileNode`, `ErrorValue`, `ActionMeta`, `SubscriptionLike`, `SemanticContext`, `IModelTemplate`, `requests.ListApplianceExportJobsRequest`, `XHRBackend`, `APIQuery`, `NxData`, `IterationService`, `UIntTypedArray`, `WorkItem`, `TAtrule`, `InitUI`, `GenerateSWOptions`, `DbSchema`, `StatusPresenceEvent`, `tBootstrapArgs`, `Types.EventName`, `AuthVuexState`, `ItemRepository`, `IDBKeyRange`, `IShape`, `CommandManager`, `IGitRemoteURL`, `AdtHTTP`, `httpm.HttpClient`, `J`, `FunctionPlotOptions`, `RequestSigner`, `StructsLib1.InfoStruct`, `AgeOrForm`, `OperationDefinitionNode`, `CardFinder`, `Product`, `FieldFormatParams`, `JSONValue`, `FirebaseMock`, `DebugInfo`, `ModelJSON`, `RotType`, `DeleteApplicationCloudWatchLoggingOptionCommandInput`, `EveesMutationCreate`, `UserItem`, `UIScrollView`, `FuncInfo`, `Customer`, `BarPrice`, `ListDevicesCommandOutput`, `DatabaseUsageMetrics`, `AuthorisationService`, `FeatureType`, `VectorStylePropertiesDescriptor`, `AssetReferenceArray`, `DocumentColorParams`, `TextureMapping`, `Renderer`, `AnnounceNumberNumberCurvePrimitive`, `BuildHandlerOptions`, `IStatusProvider`, `IReactComponentProps`, `PIXI.Texture`, `ICkbBurn`, `MatDrawer`, `VerificationClientInterface`, `CreateKeyPairCommandInput`, `ElevationRange`, `MonitoringParametersOptions`, `ISupportCodeExecutor`, `NodeVM`, `PositionOptions`, `ERC20FakeInstance`, 
`OnPreRoutingResult`, `LoaderInstance`, `SpaceBonus.DRAW_CARD`, `onChunkCallback`, `AssessmentItemController`, `LogoImageProps`, `ParserErrorListener`, `requests.ListLogsRequest`, `ArcGISAuthError`, `TransactionReducerResult`, `SupCore.Data.ProjectManifestPub`, `ShownModallyData`, `DeleteManyResponse`, `IWorkspaceDir`, `HKDF`, `Explorer`, `ReadonlyMap`, `TaskEvent`, `ExecutionContext`, `Season`, `TPluginsSetup`, `ReflectCreatorContext`, `ReportTaskParams`, `IKeyObject`, `ConditionalStringValue`, `track`, `OverridedMdastBuilders`, `VsCodeApi`, `IJsonRpcRequest`, `DeleteBackupResponse`, `$p_Expression`, `FailedAttemptError`, `ts.LiteralTypeNode`, `IHttpPromise`, `DirectoryItem`, `ILibraryResultState`, `TreeNodeGroupViewModel`, `TrackOptions`, `Stack`, `FieldDetails`, `LocationSet`, `CodeSpellCheckerExtension`, `TableStorageContext`, `RendererResult`, `WechatyPlugin`, `SpatialImagesContract`, `StepExecution`, `VueSnipState`, `TabsProps`, `CorsRule`, `CardRequirement`, `DistanceMap`, `CommandStatus`, `requests.ListCrossConnectLocationsRequest`, `FieldPath`, `FunctionBuilderArguments`, `XSLTokenLevelState`, `lsp.Diagnostic`, `StripeService`, `ListOption`, `CommandFunction`, `IT`, `DeltaAssertions`, `SpawnerOrObservable`, `AnimationPosition`, `FormFieldsType`, `ReferenceEntry`, `NetworkListenerArgs`, `ColumnOptions`, `FilteringPropertyDataProvider`, `requests.ListEdgeSubnetsRequest`, `CategorizedMethodMemberDoc`, `WeakMap`, `UIntCV`, `Register8`, `EllipseProps`, `angular.IHttpPromise`, `WorkflowEntity`, `AreaFormType`, `AnnotatedFunctionABI`, `APIEndpoint`, `Indexed`, `TypeResolution`, `IConstrutor`, `SwitchApplicationCommand`, `ClientSession`, `PendingResult`, `AlertAccentProps`, `SnapshotGenerator`, `KeyValType`, `CreateConfig`, `OverlayInitialState`, `MyResource`, `SelectTokenDialogEvent`, `IceTransport`, `SFPage`, `CredentialCache`, `TranslationChangeEvent`, `StateMethodsImpl`, `RequiredOrOptional`, `Descriptor`, `BaseGraphRewriteBuilder`, `CertificateAuthorityConfigType`, `Logout`, `GenerateOptions`, `NonEmptyString`, `OmvFeatureFilter`, `FileSchemaKey`, `DataBuckets`, `FluentUITypes.IDropdownOption`, `RestFinishedResponse`, `ScenarioData`, `JSONInMemoryCache`, `ConfigurationFile`, `ErrorDetailOptions`, `BScrollFamily`, `DLLData`, `MetricSet`, `NodeCheckFlags`, `DefaultTheme`, `PianoService`, `VdmEntity`, `QuickPickOptions`, `FilterNode`, `DepositAppState`, `PreviewProps`, `SourceString`, `Mars.NumberLike`, `ScheduledEventRetryConfig`, `messages.IdGenerator.NewId`, `PageCollection`, `Editor`, `LambdaEvent`, `SpeechCommandRecognizerResult`, `HsSaveMapService`, `TLineChartPoint`, `AgentIdentity`, `TabChangeInfo`, `TCalendarData`, `Advisor`, `ApiSettings`, `QueryStateChange`, `MockRequest`, `TransferedRegisterCommand`, `DistinctPoints`, `ListChannelsModeratedByAppInstanceUserCommandInput`, `OpenChannel`, `CalendarsImpl`, `WorkerMsgHandler`, `ObservableOption`, `ENV`, `OpenSearchDashboardsDatatableColumn`, `ImageSpecs`, `Cost`, `ElementComponent`, `PhysicalElementProps`, `UnsignedMessage`, `KeyframeIcon`, `Annotations`, `StyleAttribute`, `IEventLogService`, `MatrixType`, `Prisma.JsonValue`, `Apify.RequestQueue`, `Visual`, `KernelMessage.IOptions`, `AttributeDecoratorOptions`, `tmp.DirectoryResult`, `D.State`, `FacemeshOperatipnParams`, `TypeParameterDeclaration`, `ReadonlyObject`, `CSReturn`, `IRecurringExpenseEditInput`, `ReadableByteStreamOptions`, `HapiResponseToolkit`, `Register32`, `VisualDescriptor`, `IMouseEvent`, `ObjectDeclaration`, `UpdateParameterGroupCommandInput`, 
`DaffCategoryFilterRangePair`, `RequestManager`, `ajv.ErrorObject`, `TimelineLite`, `Precision`, `jasmine.CustomReporterResult`, `Types.PluginOutput`, `TransmartPackerHttpService`, `MockContainerRuntimeFactory`, `TagResourceOutput`, `VideoFileModel`, `IndexPatternFieldMap`, `PropertyResolver`, `GShare`, `scribe.Config`, `api.ISnapshotTree`, `DatabaseV2`, `InstallProfile`, `Timeslice`, `ManagedDatabase`, `WatchDirectoryFlags`, `AngularExternalResource`, `NodeProtocolInfo`, `SymbolMetadata`, `events.Args`, `ExpanderQuery`, `IOrganizationProject`, `AbstractView`, `LogResult`, `FixableProblem`, `SceneState`, `RxCacheService`, `DescribeEventsCommand`, `IiOSSimulator`, `IImage`, `ValidationExceptionField`, `LogViewer`, `ForwardInfo`, `SteemConnectProvider`, `IValidBranch`, `BarColorerStyle`, `ContainerPassedProps`, `TopNavConfigParams`, `IpcRendererListener`, `TreeItemIndex`, `INodeExecutionData`, `$Promisable`, `GenericAction`, `Quickey`, `ListWebhooksCommandInput`, `CompilerSystemRemoveFileResults`, `IContact`, `AvailabilityTestConfig`, `InstrumentationConfig`, `FullPath`, `TaskParameters`, `MathOptions`, `DecodedMessage`, `MoveAction`, `TimelineBucketItem`, `IExternalPrice`, `OutputTargetDist`, `IProvider`, `ColorTokens`, `IDataRow`, `ApplicationCommandOptionData`, `Inversion`, `DeeplyMockedKeys`, `ExtendedCluster`, `EdgeSnapshot`, `ListStudiosCommandInput`, `IParserConfig`, `JoinStrategy`, `ExceptionConverter`, `StaticSiteARMResource`, `CreateInstanceCommandInput`, `CacheKey`, `HSD_TExp`, `AnimationsService`, `InsertOptions`, `TypeModel`, `OneOrMore`, `MagicSDKError`, `RepositoryModel`, `RecordDataIded`, `SecurityAlertPolicyName`, `HLTVConfig`, `ExerciseService`, `ElementNames`, `SinglesigAddressType`, `ParseString`, `Mount`, `BlockParameter`, `ModeType`, `DiskAccess`, `AttributeSelector`, `RawData`, `SkeletonShapeProps`, `DataProviderProxy`, `SearchParamsMock`, `RuleDeclaration`, `CancelQueryCommandInput`, `vscode.DocumentSelector`, `ITccProfile`, `IFBXRelationShip`, `PluginInterface`, `SourceDocument`, `HammerManager`, `BaseVisType`, `ICharacter`, `GameStartType`, `DayFitnessTrendModel`, `ResolvedRecordAtomType`, `ServerResponse`, `ValueAndUnit`, `BlockchainPropertiesService`, `ADTClient`, `ArrayValue`, `JupyterLab`, `Immutable.Map`, `ParsedUrlQuery`, `NeverType`, `NVNode`, `ParsedDevModuleUrl`, `SFSchemaEnum`, `Audit`, `IssuePayload`, `ShoppingCartContextValue`, `PrimitivePolygonDrawerService`, `StyleObject`, `PagedResult`, `MultiSigSpendingCondition`, `XMessageBoxService`, `ToolAssistanceInstruction`, `MemoryUsage`, `backend_util.BackendValues`, `SchemaVisitorFactory`, `AssetBindings`, `VdmMappedEdmType`, `BibtexAst`, `RouteGroup`, `DaffError`, `GalleryItem`, `MousePressOptions`, `Aurelia`, `UseMutation`, `io.LoadOptions`, `PoiTableEntry`, `DaffProductServiceInterface`, `ModalHelperOptions`, `JoinRow`, `ProblemMatcher`, `CreateJobResponse`, `GuideData`, `BedrockFileInfo`, `ExtraData`, `ContractABI`, `TagListQueryDto`, `FactoryContext`, `ExtractorInput`, `ServerViewPageObject`, `ChatNode`, `MenusState`, `ApisService`, `EdgeDescriptor`, `ItemState`, `Deserializer`, `IBLEAbstraction`, `AbstractFetcher`, `TransactionRequest`, `requests.ListWaasPolicyCustomProtectionRulesRequest`, `AutorestContext`, `AwarenessUpdate`, `SchemaName`, `SigError`, `RequestArgs`, `TrackEntry`, `ReactFCNoChildren`, `MatchLeave`, `ts.TupleTypeNode`, `Probot`, `TileCoordinator`, `Student`, `TextRewriterState`, `IThemeWeb`, `IDatabase`, `FilePath`, `IClusters`, `InternalException`, `TxPayload`, `MockClass`, 
`angular.ui.bootstrap.IModalService`, `TypeOfContent`, `IndexTemplateMapping`, `Burst`, `TestMessages`, `ContainerType`, `Benchmark`, `AttributeValueType`, `vscode.Diagnostic`, `Key1`, `QBoxLayout`, `PersistenceManager`, `SymbolOriginInfo`, `Issue`, `ObjectSelector`, `RequestWithdrawalDTO`, `Align1`, `SearchCriteria`, `Operations`, `AsyncStateRetry`, `LanguageMatcher`, `ContractAbi`, `ICommit`, `StateHelper`, `IKeyEntry`, `Web3SideChainClient`, `CrochetTypeConstraint`, `DefaultBrowserBehavior`, `UserDocument`, `VariableArgs`, `RenderTreeEdit`, `TransactionList`, `StripeConfig`, `SetGetPath`, `RouteInitialization`, `NavItemProps`, `Redis.Redis`, `TypedLanguage`, `IExpectedSiop`, `d.Module`, `ConsoleInterceptor`, `IDataContextProvider`, `GfxrResolveTextureID`, `RetrieveResult`, `TDeclarations`, `TokenDetails`, `OverlayOptions`, `VueApolloSmartOpsRawPluginConfig`, `PiElement`, `PublicKeyData`, `IconifyAPIIconsQueryParams`, `ErrorThrower`, `DatasetManagerImpl`, `Filesystem.PackageJson`, `SFCBlockRaw`, `WrappedComponentRoot`, `LiteralSource`, `EditorModel`, `CallHierarchyIncomingCallsParams`, `React.ForwardRefExoticComponent`, `ISubmitEvent`, `Cdt`, `Simulation`, `ApolloSubscriptionElement`, `requests.ListStandardTagNamespacesRequest`, `GDQOmnibarListElement`, `ConcurrentModificationException`, `UserButton`, `MockStyleElement`, `IUnit`, `LinkModel`, `PublicPlayerModel`, `PeerTypeValues`, `ICallback`, `StateTimeline`, `MaterialButton`, `CommonInfo`, `LabwarePositionCheckStep`, `DiagnosticAction`, `SutTypes`, `ICategoryCollectionState`, `ColorSchemeId`, `SpyAsyncIterable`, `BuilderProgramState`, `TrainingConfig`, `PDFOptions`, `ManagedItem`, `MDCDialogCloseEvent`, `StatBlock`, `quat`, `MockAthena`, `AggregateBuilder`, `ExpressionFunctionVisualization`, `MaterialEditorOptions`, `AbstractSqlDriver`, `LatexAtom`, `MangoAccount`, `LibraryOptions`, `CurveFactory`, `ServerActionHandler`, `IDragEvent`, `JSHandle`, `AssertionContext`, `PlayerClass`, `SynthDefResultType`, `vscode.CompletionContext`, `WriteStorageObject`, `RawMatcherFn`, `ServiceHelper`, `HSLA`, `TestDisposable`, `Category2`, `XsltPackage`, `JPAResourceRaw`, `InstanceOf`, `CommandEnvelope`, `RadioComponent`, `XConfigService`, `IPricedItem`, `OutputTargetDocsVscode`, `JSheet`, `NineZoneStagePanelsManagerProps`, `StructurePreviewProps`, `BuildOnEventRemove`, `SelectionSource`, `IGetTimeSlotStatistics`, `ts.TextChangeRange`, `ISearchParams`, `IVocabularyItemWithId`, `TemplateBlock`, `PAT0_MatData`, `LineGeometry`, `ActOptions`, `PvsTheory`, `HelperOptions`, `DOMWindow`, `FileUploadState`, `ShortId`, `IDebtsGetByContactState`, `ProxySettings`, `ModuleWithComponentFactories`, `PluginConfigSchema`, `OptionTypeBase`, `XNotificationOption`, `EnteFile`, `d.ComponentRuntimeMeta`, `FieldMeta`, `EndpointName`, `TooltipState`, `TAccessQueryParameter`, `WorldgenRegistryHolder`, `SharingResult`, `RenderTexture2D`, `CAST_STRATEGY`, `DidOpenTextDocumentParams`, `TestItem`, `IFetchOptions`, `IUiSettingsClient`, `ColorInfo`, `LSAndTSDocResolver`, `PurchaseList`, `Parent`, `appleTransactions.Table`, `IniData`, `DescribeLoadBalancerAttributesCommandInput`, `DSONameService`, `LambdaCloseType`, `AdjustNode`, `OptimizeModuleOptions`, `Overlay`, `ChatService`, `RowSet`, `UserClients`, `ResultItem`, `SExp`, `GraphImpl`, `RestoreDBClusterToPointInTimeCommandInput`, `AsyncSnapshot`, `Firmware`, `DataViewFieldBase`, `UiPlugins`, `Url`, `ActionTypeRegistry`, `ReactorConfig`, `MessageDescriptor`, `HTMLButtonElement`, `XYZValuesArray`, `OpenYoloError`, `ResizableTest`, 
`ExecaChildProcess`, `IApiSnapshot`, `ReductionFn`, `SentimentValue`, `ComponentRuntimeMembers`, `IVirtualDeviceConfiguration`, `IEditorTracker`, `DynamicFlatNode`, `ToastProps`, `ScreepsReturnCode`, `MirroringHost`, `IToken`, `RenderedSpan`, `RuntimeFn`, `PeriodInfo`, `TagComponent`, `PluginVersionsClient`, `Axial`, `PathParamOptions`, `IReportingRule`, `PointTuple`, `CUUID`, `YACCDocument`, `TupletType`, `IAnimal`, `ResolvedConfigFilePath`, `BackgroundAnalysisBase`, `PreflightCheckNamespacesResult`, `FiddleSourceControl`, `ITerminalChunk`, `IBaseNode`, `UserApp`, `TableRefContext`, `MigrationFeedback`, `chrome.tabs.TabActiveInfo`, `AngularEditor`, `CSVDataset`, `CoinbasePayload`, `StoreGroup`, `VideoSettings`, `FlowType`, `BuildResult`, `MethodDecorator`, `Unpacked`, `HostConfig`, `ReduxRootState`, `AppInsightsQueryResultTable`, `ApplicativeHKT`, `IUnitProfile`, `TableAccessByRowIdStep`, `KeyID`, `TransportMessage`, `RSS3List`, `sdk.Conversation`, `WebsiteScanResult`, `Arity2`, `CubicBezier`, `GestureType`, `LineSeriesStyle`, `SourceFileNode`, `AutorestConfiguration`, `ChapterData`, `AimEvent`, `ClientTag`, `CameraGameState`, `EnumDictionary`, `ExampleMetadata`, `puppeteer.Page`, `LambdaAction`, `GaugeAction`, `Fermata`, `ClientBuilder`, `GrammaticalGender`, `PickerColumnOption`, `TodoList`, `InlineField`, `TransactionEventArgs`, `ICXGenericResult`, `BareFetcher`, `ScopedSlotReturnValue`, `ApplicationListener`, `MockXMLHttpRequest`, `ServerHelloDone`, `EqualsGreaterThanToken`, `StatePages`, `QueryInput`, `ActionFunctionAny`, `OrganizationsClient`, `NodeClient`, `ActionHandlerContext`, `SVGVNodeAttrs`, `HubIModel`, `builder.Session`, `WebGLUniformLocation`, `BlobTestServerFactory`, `SequenceNode`, `ERC20`, `FrameNote`, `CreateEventSubscriptionResult`, `UICollectionViewDataSourceImpl`, `WebGLEngine`, `MyService`, `VerifiedToken`, `MixerCommunicator`, `NSRange`, `backend_util.Conv2DInfo`, `Web3ProviderEngine`, `TableRecord`, `p5ex.SpriteArray`, `CarouselProperties`, `FunctionTypeNode`, `ImageCacheItem`, `TelemetryService`, `EventDef`, `LegacyWalletRecord`, `IfStatement`, `Color3`, `NgrxJsonApiStoreQueries`, `choices`, `RestConfigurationMethod`, `KeyInput`, `DAOcreatorState`, `Thought`, `CohortPatient`, `CarImage`, `PouchDB.Database`, `IJsonRPCError`, `NetworkEdge`, `ProdutoDTO`, `GestureDetail`, `Observations`, `ChipService`, `YT.SuggestedVideoQuality`, `ITestResult`, `ListTournamentsRequest`, `MultipleInterfaceDeclaration`, `IScopeData`, `FIRUser`, `KeyboardManager`, `HStackProps`, `IRgb`, `ListApmDomainsRequest`, `ElementDataset`, `VirtualEditor`, `a.Type`, `jest.Mock`, `SlmPolicy`, `DocHandler`, `RankedTester`, `VectorEntry`, `BaseProvider`, `AbstractCommandDescriptor`, `WNodeFactory`, `FullFilePath`, `WholeStoreState`, `THREE.BufferAttribute`, `ListFriendsRequest`, `LexerResult`, `FirebaseServiceNamespace`, `WrapConfig`, `ITargetFilter`, `ICallsGetByContactState`, `ContractInterfaces.Market`, `ICalDateTimeValue`, `SystemData`, `SampleDataType`, `CharacterStore`, `ITimelineGroup`, `Obj`, `QuestionType`, `IpcSender`, `EnhancedTestStore`, `GetAttendeeCommandInput`, `esbuild.BuildResult`, `NonThreadGuildBasedChannel`, `Owner`, `CannedMarket`, `DescribeBackupsCommandInput`, `SimpleLogger`, `LangiumServices`, `EllipticCurves`, `MockContainerRuntimeForReconnection`, `SmartBuffer`, `IAnimation`, `IValidatorOptions`, `TestApi`, `ComponentTemplateDeserialized`, `DialogContextOptions`, `StreamService`, `xlsx.CellObject`, `RcModuleV2`, `ConeSide`, `FlashSession`, `EDerivativeQuality`, 
`SerializedSlot`, `FilterConstructor`, `BLAKE2s`, `MessageEvent`, `ODataStructuredTypeFieldParser`, `Nature`, `ClaimantInfo`, `ImporterRegistry`, `EntityInterface`, `ArtworkData`, `Engine`, `Definitions`, `LogSplitLayout`, `TypeESMapping`, `PlayerService`, `LockStepVersionPolicy`, `Free`, `EVMParamValues`, `SideMenuState`, `MDCSemanticColorScheme`, `UpdateConfigurationSetEventDestinationCommandInput`, `Accumulator`, `AppFilters`, `ListCtor`, `RaceCancellation`, `ProjectImage`, `SyncedRef`, `AccountsServer`, `GitDiff`, `Implementation`, `ObjectLiteral`, `ElementRef`, `TargetedMouseEvent`, `SigningKey`, `Kubeconfig`, `StatsTable`, `SFTPWrapper`, `CommandClassOptions`, `MapAnchors`, `RouterData`, `MockWrapper`, `GX_VtxAttrFmt`, `EvalParam`, `TreeProps`, `MarkdownSection`, `PolySynth`, `ITabInternal`, `IFS`, `ListUsersCommandOutput`, `Leaf`, `ErrorFn`, `SqrlExecutable`, `CohortService`, `FileTransportInstance`, `ToastState`, `EscapedPath`, `ExcerptToken`, `Middleware`, `PlacementStrategy`, `ControlPointView`, `GridApi`, `CircuitGroup`, `Parse.User`, `Softmax`, `PopupUtilsService`, `DurationUnit`, `AccountModel`, `Area`, `NameAndContent`, `AddTagsToResourceCommandOutput`, `BufferTypeValues`, `d.DevClientConfig`, `StoredReference`, `WebSiteManagementModels.FunctionEnvelope`, `com.github.triniwiz.canvas.ImageAsset.Callback`, `LoggerTask`, `NumberFilter`, `TableEntry`, `LeaderboardEntry`, `IAddAccountState`, `NodeMaterialBlock`, `InitiatingTranslation`, `ContractDeployOptions`, `ThresholdCreateSchema`, `TimeGranularity`, `IRawDiff`, `ResponseEnvelope`, `NgrxFormControlId`, `NumberFormat.UInt32LE`, `ThyDragStartEvent`, `CoinPrimitive`, `OnPreResponseInfo`, `PolkadotConnection`, `GLTF2.GLTF`, `NumberRenderer`, `Environment_t`, `requests.ListPingMonitorsRequest`, `APIGatewayProxyResult`, `InternalVariation`, `BurnerPluginContext`, `Main.LogScope`, `PostfixUnaryOperator`, `Dev`, `KillRing`, `IUniform`, `Express.Response`, `ConfigConfig`, `CharacterInfo`, `VisualProperties`, `ThyGuider`, `CommandBus`, `IType`, `MinMax`, `OF.IDropdownOption`, `GeocodeQueryInterface`, `ResolveImportResult`, `DSVRowString`, `TronTransactionInfo`, `RetryLink`, `ProviderEventType`, `TransformParams`, `DOption`, `Toggle.Props`, `AutorestSyncLogger`, `ITxRecord`, `resourceI`, `SuggestMatch`, `CommonDivProps`, `HttpPayloadTraitsWithMediaTypeCommandInput`, `PresetOptions`, `OptionalCV`, `ConfigInterface`, `models.ArtifactItem`, `Windup`, `APIResource`, `CLM.AppDefinition`, `RRdomTreeNode`, `SmdDataRowModel`, `Vec2Term`, `UserEvent`, `PreviewSize`, `requests.ListObjectsRequest`, `NotifyFn`, `RestRequestOptions`, `FilterType`, `PoolInfo`, `RecognizerResult`, `NAVObjectAction`, `Release`, `ESMessage`, `CipherData`, `SourceDetails`, `CellKey`, `IListViewCommandSetListViewUpdatedParameters`, `TDeleteManyInput`, `PackageAccess`, `JSChildNode`, `ChainInfoInner`, `MarketCurrency`, `RouterExtensions`, `ILocationResolver`, `UrlGeneratorId`, `PostMessageStorage`, `District`, `LineSeries`, `DeleteSettingCommand`, `observable.EventData`, `TableBuilderComponent`, `SharedFunctionsParser`, `CompletionEntry`, `MaterialInstanceConfig`, `MIRStatmentGuard`, `tf.io.ModelArtifacts`, `Endian`, `UnsupportedTypeLog`, `PreActor`, `UnionShape`, `ComposedChartTickProps`, `StyledProps`, `Legend.Item`, `ng.IModule`, `StorableUrl`, `SeriesOptions`, `LoginResultModel`, `MDCContainerScheme`, `DebugNode`, `ListQueuesRequest`, `MapNode`, `IDropboxEntry`, `ComparisonNode`, `Browser`, `AudioInputDevice`, `UserApollo`, `AudioStreamFormat`, `BufferSource`, 
`CommonService`, `PartsModel`, `SemanticTokens`, `AccessorComponentType`, `StackCardInterpolationProps`, `theia.DocumentSelector`, `WorkRequestError`, `IDatasource`, `Status`, `MongoCallback`, `CLM.LogDialog`, `Repeater`, `IDiffStatus`, `DevError`, `ArrayPromise`, `IOperandPair`, `Arbitrary`, `UserDetailsQuery`, `ModuleSymbolMap`, `ArmFunctionDescriptor`, `AccessorNames`, `GitStore`, `memory`, `SocketHandler`, `GetIPSetCommandInput`, `d3.Selection`, `SeparatorAxisTest2D`, `RangeDataType`, `GlobalStore`, `DAL.KEY_COMMA`, `BrowseProductsFacade`, `MaterialRenderContext`, `OrderJSON`, `IMiddlewareEvent`, `ILoginResult`, `PostItem`, `IDocString`, `WishlistsDetailsPage`, `ActionSheetButton`, `ForceLightningLwcStartExecutor`, `HardhatUserConfig`, `LocalVideoStream`, `ValueTransformer`, `InspectorLogEntry`, `PatchObjectMetadata`, `CacheHandler`, `ChatPlugin`, `SimpleExpressionNode`, `StringFilterFunction`, `AnySchema`, `FlightType`, `XYBrushEvent`, `ElementProfile`, `DeployOrganizationStep`, `TokenInfo`, `DeflateWorker`, `TaskManager`, `ModuleSpec`, `FrameInfo`, `ITokenClaims`, `StaticBuildOptions`, `DashboardCollectorData`, `OptionsConfig`, `Tolerance`, `MatchExplanationTreeNode`, `MicrophoneIterator`, `TextBlock`, `SessionStorageSinks`, `FrameNavigation`, `TelemetryReporter`, `InvalidParameterException`, `egret.Point`, `MlRoute`, `MatrixMessageProcessor`, `ModuleContext`, `PutMessagesResultEntry`, `Lunar`, `AxeCoreResults`, `CompletedPayload`, `DisabledRequest`, `StreamData`, `HomePage`, `FeaturePrivilegeAlertingBuilder`, `Readable`, `IAlertProps`, `SupCore.Data.Entries`, `GQLResolver`, `TIntermediate1`, `SortedSetItem`, `DatasourceRef`, `DirectionDOMRenderer`, `DisplacementFeature`, `MarkdownContentService`, `QueryChannelRangeMessage`, `IPeacockSettings`, `ParamMap`, `FilterValueExpressionOrList`, `DotIdentifierContext`, `ViewportCoords`, `ChangesetIndex`, `MapConstructor`, `UiSyncEventArgs`, `ProviderRegistry`, `ISODateTime`, `OasRef`, `AppData`, `Wizard`, `DeleteOptions`, `PutObjectOptions`, `ParingTable`, `Monad3`, `Point2d`, `Shard`, `Groupby`, `KdfType`, `ActiveProps`, `DamageType`, `CommonMiddlewareUnion`, `InviteService`, `Scalar`, `IViewProps`, `FeedbackRecord`, `AnyValidateFunction`, `SdkDataMessageFrame`, `RarePack`, `IAttentionSeekerAnimationOptions`, `SessionTypes.Settled`, `IDocumentServiceFactory`, `RedPepperService`, `ServerDevice`, `SNSEvent`, `TelegramClient`, `ClaimingSolution`, `AllActions`, `QueryPointer`, `AnalysisContext`, `CalculateHistogramIntervalParams`, `PluginWriteAction`, `TestingLogger`, `LogAnalyticsSourcePattern`, `OffsetConnectionType`, `ng.IAngularEvent`, `GX.CompType`, `Record.Update`, `vsc.Uri`, `MatOption`, `MenuStateBuilder`, `SnippetVisibility`, `MapSavedObjectAttributes`, `RecordPatternItem`, `GraphSnapshot`, `RandomNormalArgs`, `FontSize`, `ZoneWindowResizeSettings`, `SingleRepositoryStatisticsState`, `api.IZoweTree`, `PlaceEntity`, `NotificationActions`, `Hmi`, `RSSFeed`, `AbstractTransUnit`, `AnyStandaloneContext`, `IHandlerAdapter`, `DocumentClassList`, `FogBlock`, `StateForStyles`, `ILoginOptions`, `actionTypes`, `ITerm`, `AlertExecutionStatus`, `CodeFixAction`, `TValPointer`, `FnN4`, `Voyager`, `VIdentifier`, `AssociationConfig`, `TestContractAPI`, `Intervaler`, `SuperTest.SuperTest`, `InputBoxOptions`, `APITweet`, `JsxOpeningElement`, `BundleManifest`, `RouterContext`, `TestUseCase`, `Fixed8`, `ConnectResponseAction`, `PlayerData`, `GADBannerView`, `DescribeCommunicationsCommandInput`, `TVLAnalysis`, `DbRefBuilder`, `UrlResolver`, 
`ConstructDataType`, `AzureFunction`, `RowViewRef`, `TopicChangedListener`, `TEBopType`, `FormConnectionString`, `TDynamicObj`, `InternalCorePreboot`, `OperatorOption`, `ReadReceiptReceivedEvent`, `BatchGetItemInput`, `FakeInput`, `RediagramGlobalConfig`, `TObject`, `SlateEditor`, `BindingTemplate`, `GitHubUser`, `DescribeLoggingOptionsCommandInput`, `ObjMap`, `OutputAdapter`, `Outside`, `StableToken`, `OutputBinaryStream`, `CombinedState`, `ContractOptions`, `IBlockchainEvent`, `TrialVisitConstraint`, `FileStatus`, `IOrder`, `NotificationChannelServiceClient`, `IWindow`, `TransformingNetworkClient`, `StringSchema`, `CheckableElement`, `HomeAssistantMock`, `UniversalAliasTable`, `SurveyForDesigner`, `optionsType`, `MySpacesComponent`, `MemberId`, `IList`, `CanvasTypeProperties`, `UntagResourceResponse`, `RnPromise`, `PathCandidate`, `Change`, `CollectionOptions`, `pw.Page`, `ChatNodeVM`, `IncomingDefault`, `FieldUpdate`, `TeamProps`, `PRNG`, `RouteChildrenProps`, `ListDomainsRequest`, `SempreResult`, `MainDate`, `SqrlTest`, `RedirectUri`, `pageOptions`, `UInt128`, `QueryBuilder`, `DataStoreService`, `InteractivityChecker`, `ExtProfile`, `APEv2Parser`, `MetricSourceIntegration`, `OpenAPIV2.Document`, `CampaignItemType`, `AttributeContainer`, `ChannelSigner`, `DefaultEditorAggSelectProps`, `EditorFromTextArea`, `IRouterSlot`, `ProtocolVersionFile`, `SlaverListener`, `TPagedParams`, `ConfigProps`, `Events.postkill`, `MapStateToProps`, `ApplyPendingMaintenanceActionCommandInput`, `CursorEvents`, `Constraints`, `FavoritesState`, `UpdateInfoJSON`, `MockValidatorsInstance`, `PublicPolynomial`, `ActionsConfigurationUtilities`, `SurveyResultMongoRepository`, `ThemeValue`, `UnitWithSymbols`, `ILeague`, `ICardInfo`, `ContributionProvider`, `AssertionResult`, `TimerType`, `TabularCallback`, `TPLTextureHolder`, `GetTagsCommandInput`, `SvgIconConfig`, `MediaDef`, `YAxis`, `PromptResult`, `JSONSchemaSourceData`, `ts.HeritageClause`, `MessageWithReplies`, `vec3`, `ListFunctionsRequest`, `DAL.DEVICE_ID_SERIAL`, `JobResult`, `android.animation.Animator`, `IUpworkClientSecretPair`, `TaskDetails`, `UserFunction`, `WidgetTracker`, `TextDecoration`, `ParsedUrlQueryInput`, `DocusaurusContext`, `SerumMarket`, `Portion`, `SchemaConfig`, `ServiceNameFormatter`, `GetServersCommandInput`, `ListClient`, `RouteShorthandOptions`, `Miscellaneous`, `DragLayerMonitor`, `IndexedXYZCollection`, `Out`, `QueryIdempotencyTokenAutoFillCommandInput`, `StandardTableColumnProps`, `ExceptionIndex`, `Reward`, `ISubWorkflow`, `TSESLint.Scope.Reference`, `ThingProto`, `SignedOperation`, `ILoggerModel`, `CreateElement`, `MagickColor`, `UserSummary`, `FormGroupControls`, `SourceFileContext`, `LngLat`, `IssueAnnotationData`, `Arrayish`, `ITest`, `CallAgentProviderProps`, `GenericBreakpoint`, `SfdxTestGroupNode`, `FooValueObject`, `PagerDutyActionTypeExecutorOptions`, `messages.Attachment`, `GraphQLTypeInformer`, `AggParamEditorProps`, `IStateTypes`, `StatefulLogEvent`, `TextTexture`, `CheckupConfig`, `CheckboxFilter`, `Mesh_t`, `d.OutputTargetAngular`, `PostsContextData`, `google.maps.GeocoderResult`, `RootPackageInfo`, `DateFormatter`, `Encoding.Encoding`, `IReserveApiModel`, `AuthDataService`, `TextEditorConfiguration`, `AttrState`, `OpeningHours`, `BroadcastTxResponse`, `ClientSocketPacket`, `PartialOptions`, `ConformancePatternRule`, `ListConnectionsResponse`, `AccessKeyId`, `ReactionId`, `PgdbDataSource`, `EditMediaDto`, `ActionTypeModel`, `TextBuffer.Point`, `FungibleTokenDetailed`, `ISearchSetup`, `ValidatorConfig`, 
`ApiCredentials`, `SchemaComposer`, `STRowSource`, `PerformanceEntryList`, `BencheeSuite`, `TransitionConditionalProperties`, `IPropertyPaneConfiguration`, `DataImportRootStore`, `ObjectContent`, `Mars.AddressLike`, `DescribeInstancesCommandInput`, `IMinemeldStatusService`, `ISampler3DTerm`, `DropoutLayerArgs`, `ApplicateOptions`, `DataLayer`, `ExpressionsStart`, `ProgressBarEvent`, `EventMutation`, `Cancelable`, `OpenCLBuffer`, `ExtensionSettings`, `IViewport`, `UInt64`, `ApplicationLoadBalancer`, `MerkleIntervalInclusionProof`, `EntityDTO`, `ContractTransaction`, `CommandConstructorContract`, `Cypress.Actions`, `PlanetData`, `firebase.Promise`, `ParquetCompression`, `CanvasSpaceNumbers`, `LegendOrientation`, `EyeGazeEvent`, `NNode`, `DebugSystem`, `ScaleValue`, `DefaultRenderingPipeline`, `Oas3`, `IScripts`, `RecentDatum`, `_ts.Node`, `ICassClusterModuleState`, `IVar`, `tensorflow.IGraphDef`, `TargetConfiguration`, `YearToDateProgressConfigModel`, `DynamicFormLayoutService`, `ComponentArgTypes`, `ListColumn`, `ColumnDefinitionBuilder`, `WakuMessage`, `Config.GlobalConfig`, `PageHelpers`, `IPSet`, `Triangle`, `AssessmentType`, `DomApi`, `Hasher`, `SpectrumElement`, `BuildAnnotation`, `MatrixUser`, `VideoInfo`, `ASTConverter`, `SentimentAspect`, `FileSystemCommandContext`, `ParserContext`, `StorageObjectAcks`, `HealthPolledAction`, `BaseHub`, `Accord`, `JobStatus`, `UserPreferences`, `MediationState`, `CommonOptions`, `MapImage`, `SharedConfig`, `KibanaExecutionContext`, `CmsContext`, `ScaledSize`, `PropEnhancers`, `NetworkBuilder`, `ChemicalState`, `EngineArgs.ListMigrationDirectoriesInput`, `IMonitorPanelAction`, `EvActivityCallUI`, `NumberLike`, `requests.ListWorkRequestLogsRequest`, `StackHeaderProps`, `PropertyAnimation`, `NetworkConfig`, `GeometryKind`, `IVertoCallOptions`, `StarPieceHostInfo`, `FaunaPaginateOptions`, `Vars`, `React.DragEvent`, `Eth`, `WebhookProps`, `SearchEmbeddableConfig`, `TwingTemplate`, `PolymorphicPropsWithoutRef`, `EntityObject`, `Answers`, `GetResult`, `DriverException`, `CreateUser`, `ToolbarButton`, `SubscriptionField`, `reflect.TypeReference`, `InsightShortId`, `MapDispatchToProps`, `IResultSetRowKey`, `HalfEdgeGraph`, `InstanceClient`, `TestabilityRegistry`, `ObjectNode`, `SwapInfo`, `DeviceConnection`, `TEBinOp`, `ServiceManager`, `ViewMetaData`, `OscillatorNode`, `LayerObjInfoCallback`, `JHistory`, `OperationBatcher`, `FrameGraphicsItem`, `DateIntervalDescriptor`, `ContentChangedCallbackOption`, `BufferLine`, `IColorModeContextProps`, `OverrideOptions`, `ActionToRequestMapper`, `ModelObj`, `RowLevel`, `MiniMap`, `ReadableStreamController`, `MsgAndExtras`, `EngineResults.DevDiagnosticOutput`, `ChainedIterator`, `DAL.DEVICE_ID_GESTURE`, `DryPackage`, `RedisConnectionManager`, `GaussianDropout`, `TheiaDockPanel`, `EthereumEvent`, `StripePaymentMethod`, `TreeChanges`, `IGhcModProvider`, `AMock`, `RouteProp`, `AppStatusChangeFaker`, `CalendarRange`, `MoveData`, `InsertOneResult`, `AlbumListItemType`, `GrowableXYZArray`, `MemoryDump`, `AC`, `TagKey`, `NewSyntax`, `MonitoredHealth`, `NFT1155V3`, `LineChartLineMesh`, `Libraries`, `QueueData`, `EntityKey`, `CommentRange`, `requests.ListDhcpOptionsRequest`, `Pully`, `DataConfig`, `WsViewstateService`, `ElementDefinition`, `Of`, `AutocompleteProvider`, `VaultAdapterMock`, `SavedObjectsServiceStart`, `DataDown`, `PrefFilterRule`, `Knex.Transaction`, `NaviRequest`, `AutoRest`, `binding_grammarVisitor`, `EventPublisher`, `VirtualFilesystem`, `SetupModeProps`, `ArtifactItem`, `StynPlugin`, `MarkConfig`, 
`ExtendedAppMainProcess`, `Reader`, `MalRequest`, `ApolloServerExpressConfig`, `VSnipContext`, `GetResponse`, `TreemapSeries.ListOfParentsObject`, `MultiPolygon`, `IcalEventsConfig`, `FileEditorSpec`, `ImageSourcePropType`, `OsmObject`, `DeployStageExecutionStep`, `IUiStateSlice`, `ComponentFixture`, `FilterFunction`, `ShortcutEventOutput`, `ZoomOptions`, `VisTypeTimeseriesVisDataRequest`, `TransitionDefinition`, `VisualizationConfig`, `ChannelState`, `S3Control`, `Computed`, `SwitchOptions`, `ImageInspectInfo`, `PhrasesFilter`, `CronService`, `WeaponMaterial`, `git.ICreateBlobParams`, `ArweavePathManifest`, `androidx.appcompat.app.AppCompatActivity`, `DoorLockCCOperationReport`, `SupervisionResult`, `VisualizationsSetupDeps`, `SavedObjectsPublicPlugin`, `SeriesOption`, `Scope`, `TouchGestureEventData`, `HintMetadata`, `TestBed`, `JointConfig`, `TableOffsetMap`, `THREE.Euler`, `GenerateClientOptions`, `HomePageProps`, `TypeName`, `TypeCacheEntry`, `SingleEmitter`, `ReindexState`, `FunctionImportRequestBuilder`, `SavedObjectsImportOptions`, `SwitchAst`, `LoginOptions`, `Resources`, `IDBTransactionMode`, `TensorLike2D`, `RTCRtpParameters`, `ChartParams`, `Types.TooltipCfg`, `UserStakingData`, `TransferType`, `ConstrainDOMString`, `HomeReduerState`, `AlainI18NService`, `EditorCompletionState`, `PlanPreviewPanel`, `IdentityNameValidityError`, `AddMessage`, `RecommendationLevel`, `Partition`, `SubscriberType`, `ItemTable`, `DecoratorDefArg`, `MOscMod`, `AddonEnvironment`, `DispatchWithoutAction`, `DebtKernelContract`, `IController.IParameter`, `DriverModel`, `Matrix22`, `SheetRef`, `URLParse`, `GiphyService`, `SVGLabel`, `StreamModel`, `ExceptionsBuilderExceptionItem`, `WindiPluginUtils`, `TerraNetwork`, `GraphQLFieldMap`, `GetItemOutput`, `ImageViewerProps`, `ProjectInfo`, `ConfigSet`, `FunctionProp`, `WeaponData`, `ForgotPasswordVerifyAccountsRequestMessage`, `UserID`, `Snap`, `Web3ReactContextInterface`, `IFilterListGroup`, `ActionWithError`, `IBalance`, `ListInstancesCommandInput`, `AggConfigOptions`, `BillId`, `GameSettings`, `MediaFileId`, `ICredentialsDb`, `CallArguments`, `StatusFollow`, `sdk.TranslationRecognitionEventArgs`, `FrameType`, `EngineEventType`, `FormWindow`, `loader.Loader`, `CreateConfigurationCommandInput`, `GeoBoundingBoxFilter`, `DataSharingService`, `GameInput`, `Pile`, `FadeSession`, `RawNodeData`, `PresentationRpcRequestOptions`, `InternalHttpServiceSetup`, `PropParam`, `LinkedAccount`, `AxisDimension`, `ILiquorTreeNode`, `IGradGraphs`, `CSSProperties`, `LuaDebugVarInfo`, `ChampList`, `StorageRecord`, `Aser`, `BufferConstructor`, `ProjectMeta`, `DescribeTagsCommand`, `SelectorT`, `EvaluateOperator`, `env`, `HTMLPreElement`, `RegisteredTopNavMenuData`, `OperationQueryParameter`, `MatchDoc`, `ConflictResolution`, `ConverterDiagnostic`, `SoundConfig`, `CompanyType`, `MetadataSelector`, `Dump`, `vscode.TextDocument`, `DatabaseOptions`, `CreateDBClusterParameterGroupCommandInput`, `RequestorBuilder`, `ShoppingCartStore`, `IDependenciesSection`, `PbSecurityPermission`, `IServerSideDatasource`, `WFunction`, `FeedbackId`, `QueryMiddleware`, `DocumentMetadata`, `MaxNormArgs`, `GlobalInstructionData`, `AppletIconStyles`, `SandDance.specs.Insight`, `AuthorModel`, `GetEmailTemplateCommandInput`, `UpdateFlowCommandInput`, `CreateSubnetGroupCommandInput`, `CircuitInfo`, `ENGINE`, `Events.enterviewport`, `CachedImportResults`, `Git`, `PluginDependencies`, `FileSpec`, `TimingSegmentName`, `ISavedObjectsPointInTimeFinder`, `LoaderBundleOutput`, `ScreenTestViewport`, `AdvancedFilter`, 
`DeleteChannelMessageCommandInput`, `IBox`, `requests.ListComputeGlobalImageCapabilitySchemaVersionsRequest`, `TDiscord.GuildMember`, `ForgeModAnnotationData`, `ChildAttributesItem`, `DBUser`, `RefreshAccessTokenAccountsValidationResult`, `BreadCrumb`, `URL`, `ISequencedDocumentMessage`, `Eventual`, `GraphQLHandler`, `DeleteDatasetRequest`, `Curve`, `IPackage`, `InterfaceTemplate`, `OptionKind`, `NeverShape`, `KeyResultUpdateService`, `IMineMeldAPIService`, `MockProject`, `FileHandlerAPIs`, `HsUtilsService`, `TestRouter`, `MediaTrackSupportedConstraints`, `MLKitRecognizeTextResult`, `AxisBuilder`, `TState`, `GreetingStruct`, `EdmTypeShared`, `TreeFile`, `Theme`, `Variants`, `SpriteFont`, `IndigoOptions`, `IWriteOptions`, `AppStateSelectedCells`, `TRPGAction`, `ContainerContext`, `ColorValue`, `NinjaPriceInfo`, `AgentConnection`, `WordCharacterClassifier`, `Decider`, `VoidFunctionComponent`, `enet.IDecodePackage`, `BlockReference`, `S.Stream`, `RouteRule`, `IButtonStyles`, `SessionUserAgent`, `InheritedChildInput`, `mmLooseObject`, `ModelEvaluateDatasetArgs`, `CtrBroad`, `SignedMessage`, `MessageServiceInterface`, `INodeContainerInfo`, `MockFluidDataStoreRuntime`, `ComboFilterSettings`, `NumericOperand`, `Sql`, `MigrationSubject`, `CompositeTreeNode`, `FlexElementProps`, `PDFRadioGroup`, `MetricName`, `DraggableEvent`, `TypographyDefinition`, `InspectReport`, `RetryAction`, `HashSet`, `WebpackRule`, `ChakraComponent`, `LiteralMap`, `RequestPolicyFactory`, `ReadAddrFn`, `SendDataMessage`, `IMarkdownDocument`, `Batcher`, `IObject3d`, `AttendanceDay`, `IExpression`, `NockDefinition`, `MenuItem`, `ts.ModuleResolutionHost`, `ContactInterface`, `SiteSourceControl`, `ITestPlan`, `app.FileService`, `ICellStructure`, `IHasher`, `GraphQLParams`, `Buildkite`, `IsNumber`, `ISampleSizeBox`, `lspCommon.WorkspaceType`, `FsTreeNode`, `Generic`, `globalThis.MouseEvent`, `DepositKeyInterface`, `DescribeExecutionCommandInput`, `sinon.SinonStubbedInstance`, `DescribeSubnetGroupsCommandInput`, `ApiJob`, `SubscribeEvents`, `IDockerComposeResult`, `PostMessageStub`, `MultilevelSwitchCCReport`, `LogLevel`, `IHydrator`, `MEPChromosome`, `RegisteredClient`, `ContentRepository`, `MacroBuffer`, `ResourcePage`, `apid.ProgramId`, `PrimitiveArg`, `CIMap`, `ViewRect`, `UInt128Array`, `AuthenticationService`, `OpenSearchQueryConfig`, `$FixMe`, `ProcessAccountsFunc`, `HDKeychain`, `EventsTableRowItem`, `Referral`, `VisorSubscription`, `MapPool`, `ITableProps`, `SeriesSpecs`, `ShadowGenerator`, `MapperService`, `SessionCsrfService`, `DateObject`, `EmailConfig`, `ExtraDataTypeManager`, `PartyMatchmakerAdd_StringPropertiesEntry`, `AccountBase`, `NodeParserOption`, `CC`, `ImageResolvedAssetSource`, `LiveAnnouncerDefaultOptions`, `MathfieldPrivate`, `DQLSyntaxErrorExpected`, `StacksNetwork`, `ObjectFetcher`, `Panner`, `Pkg`, `ErrorBarSelector`, `ExecutionResult`, `MockBaseService`, `BMD`, `d.OptimizeJsInput`, `OrderType`, `CustomAnimateProps`, `ScopeContext`, `Compatible`, `DevServerEditor`, `ScopeQuickPickItem`, `ViewFunctionConfig`, `StopExperimentCommandInput`, `GradientObject`, `RegisterDr`, `Distance`, `DisplayableState`, `TodoController`, `TaskChecklistItem`, `InternalKeyComparator`, `JumpFloodOutput`, `VarianceScalingArgs`, `ISizeCalculationResult`, `StringLiteral`, `CopyDirection`, `ImageInfo`, `tmp.DirResult`, `UnionC`, `APropInterface`, `OrganizationContext`, `FieldContextValue`, `StackHeaderInterpolatedStyle`, `UserDataContextAPI`, `ListTagsCommand`, `ComplexArray`, `BucketHandler`, `ScrollState`, 
`IQueryParamsConfig`, `GitHubActionWorkflowRequestContent`, `t.Node`, `DataStreamInfo`, `Switch`, `IDerivation`, `DateRangeMatch`, `ChildSchoolRelation`, `P5`, `IndexStats`, `FileStats`, `SymbolDefinition`, `CellValueChangedEvent`, `UpdateUserInput`, `RenderSchedule.ScriptProps`, `MochaOptions`, `IApolloContext`, `StyleRecord`, `SelectableListService`, `supertest.SuperTest`, `BinaryHeap`, `InterfaceWithConstructSignatureReturn`, `VideoLayer`, `IEcsDockerImage`, `BarEntry`, `Cloud`, `ReduxState`, `IValueChanged`, `NormalizedDiagnostic`, `TBook`, `SceneTreeTimer`, `ResponseMeta`, `ElUploadRequest`, `builders`, `DateTimeFormat`, `PartitionedFilters`, `ListUsersCommand`, `ErrorService`, `Disposable`, `DescribeDatasetImportJobCommandInput`, `CustomMapCache`, `SubstrateEvent`, `StreamFrame`, `TransitionPreset`, `ReadOnlyFunctionOptions`, `DirectThreadEntity`, `Mocha.Context`, `SFProps`, `ViewPropertyConfig`, `prettier.Options`, `TPath`, `UIEventSource`, `vscode.TextLine`, `SettingsProvider`, `DeviceTypeJson`, `ContactSubscriptions`, `DataObject`, `ListAlarmsRequest`, `CodepointType`, `ContextWithMedia`, `GraphQLEntityFields`, `TTag`, `FlexLine`, `GraphQLResolverContext`, `SitesFixesParserOptions`, `DecoratorNode`, `ts.UserPreferences`, `WrapExportedClass`, `CompleteResult`, `execa.ExecaChildProcess`, `ISuggestValue`, `PersistedSnapshot`, `WebDNNWebGLContext`, `TypeDef`, `CollisionObject2DSW`, `TransactionPayload`, `vscode.EndOfLine`, `SourceCodeInfo_Location`, `CliInfo`, `requests.ListWafLogsRequest`, `ICoverageFragment`, `ApiInterfaceRx`, `PersistAppState`, `AstNode`, `PLI`, `requests.ListCloudVmClusterUpdateHistoryEntriesRequest`, `InstancePrincipalsAuthenticationDetailsProvider`, `WorkRequestStatus`, `TreeEdge`, `VariableUiElement`, `BlockElement`, `ColumnProps`, `AppleTV`, `WXML.TapEvent`, `ListDomainNamesCommandInput`, `ITagProps`, `RTCIceTransport`, `Ad`, `ArticleStateTree`, `ClusterEvent`, `WorldgenRegistry`, `SyntaxDefinition`, `AnchorBank`, `GradleVersionInfo`, `TsSelectComponent`, `Victor`, `EmailTempState`, `IExtentChunk`, `ITrackEntry`, `ShaderVariable`, `WorkerProxy`, `PriceState`, `TranslatedValueID`, `IFileDescription`, `ImGui.Vec2`, `PythonPreviewConfiguration`, `d.CompilerBuildResults`, `EventDelegator`, `InstanceLocator`, `DatasourcePublicAPI`, `InferableComponentEnhancerWithProps`, `CreateDatabaseCommandInput`, `VoiceConnection`, `WrapOptions`, `IVoicemail`, `IRNGNormal`, `s.Field`, `PersonAssignmentData`, `AP`, `LazyMedia`, `LintResult`, `Bus`, `ConsolidateArgs`, `PlasmicConfig`, `CausalRepoIndex`, `NavigationIndicator`, `ClassMetadata`, `IAppStrings`, `DeleteDestinationCommandInput`, `SchemeRegistrarWrapper`, `PathItemObject`, `RecipientOrGroup`, `poller.IPollConfig`, `GherkinQuery`, `Multer`, `AddonClass`, `MaterialAccentColor`, `UrbitVisorConsumerExtension`, `Account`, `ComponentDecorator`, `BabelOptions`, `AuxUser`, `TabStyle`, `DeleteInstanceProfileCommandInput`, `BudgetResult`, `EmailConfirmationsStore`, `ScriptData`, `AccessTokenProvider`, `TinyQueue`, `VgApiService`, `DictionaryEntryNode`, `BlockNode`, `RTCIceGatherer`, `TldrawApp`, `Banner`, `MapBounds`, `CheckpointNode`, `Integer`, `Interfaces.IBroker`, `ConfigurationCCBulkSet`, `StyledOtherComponent`, `Others`, `RoomItem`, `MethodAst`, `DefaultTreeDocument`, `ContextMenuExampleProps`, `ChannelInflator`, `ComponentStrings`, `EndpointDefinition`, `UITabBarController`, `FullIndex`, `Preset`, `IAttr`, `CommonMiddleware`, `ApiMethod`, `NearSwapTransaction`, `InitParams`, `OpenSearchDashboardsDatatable`, `ZonedDateTime`, 
`IndexPatternRef`, `IDocumentFragment`, `GlobalEventName`, `PromptProps`, `CreateChannelBanCommandInput`, `ActorArgs`, `IResultSetElementKey`, `PredictionContext`, `GridBase`, `EffectFunction`, `GroupMembershipEntity`, `UtilObject`, `https.AgentOptions`, `SDK`, `ObjectDetails`, `_ISelection`, `Config.IConfigProvider`, `IConfigurationExtend`, `Widget.ChildMessage`, `IToastOptions`, `UserResult`, `TestingSystem`, `IBaseTabState`, `LoginForm`, `DebugStateAxes`, `EdmxEntityTypeV4`, `UInt256`, `SemanticRole`, `MessageDataFilled`, `ethOptionWithStat`, `Event1EventFilter`, `vscode.CompletionList`, `CommentState`, `ItemEntity`, `ResponseWrapper`, `MenuSurfaceBase`, `PIXI.Graphics`, `InMemoryLiveQueryStore`, `TLockfileObject`, `DeviceChangeObserver`, `JsSignatureProvider`, `ExcludedEdges`, `ListKeyVersionsRequest`, `StringLiteralExpr`, `Link`, `Events.pointercancel`, `MDCBottomSheetController`, `identity.IdentityClient`, `MemberDoc`, `FaktoryControl`, `HeadClient`, `IConversation`, `DiffedURIs`, `PublishOptions`, `theia.SemanticTokensLegend`, `AppServiceBot`, `ConnectedPeer`, `ChangeLanguage`, `LocalActions`, `V3SubgraphPool`, `IConnectionExecutionContextInfo`, `YAMLSchemaService`, `IInvoiceUpdateInput`, `CreateCard`, `PackagePolicyVars`, `IEpochOverview`, `AuxVM`, `IndexedPolyface`, `OutlineSharedMetricsPublisher`, `FurMulti`, `PrivateEndpoint`, `CollateralRequirement`, `IAppServiceWizardContext`, `SqliteValue`, `UnaryContext`, `TSExpr`, `Tags`, `MessagingDevicesResponse`, `Intermediate`, `ExtensionData`, `CodeGenerator`, `ImGui.DrawList`, `MyObject`, `RouterOutlet`, `requests.ListRouteTablesRequest`, `GenesisBlock`, `Accountability`, `FocusPath`, `BalanceActivityCallback`, `ChatMessageType`, `ErrorReporter`, `BigFloat32`, `SerializeCssOptions`, `RenderElement`, `VirtualContestProblem`, `SoftVis3dMesh`, `NumericType`, `MapState`, `GetGroupResponse`, `KernelConfig`, `InternalServerException`, `PackageService`, `Semaphore`, `VersionId`, `ListReleaseLabelsCommandInput`, `RowSchema`, `MapView`, `LanguageServiceHost`, `FakeHttpProvider`, `IDBCursorWithValue`, `TRWorld`, `ko.Subscription`, `CommitStatus`, `PageBlobClient`, `ArgType`, `ErrorReport`, `ThemeConfiguration`, `Issuer`, `IAngularScope`, `SceneDesc`, `StreamingStatus`, `ControlPoint`, `LLink`, `App.IPolicy`, `AlbumService`, `MXCurve`, `HttpMethod`, `RowProps`, `TradeExchangeMessage`, `HorizontalAlignment`, `JsonRpcId`, `GetRepository`, `KeyIndex`, `ElkLabel`, `GlyphCacheEntry`, `AlertOptions`, `FavoriteGroup`, `MonzoService`, `ConfigurationService`, `SQLeetEngine`, `QueryNodePath`, `VatLayout`, `ListTagsForResourceInput`, `InMemoryConfig`, `ValidateRuleOptions`, `BaseFactory`, `IRootReducer`, `EmployeeViewModel`, `WorkspaceProject`, `SFCScriptBlock`, `ReducerMap`, `IUIMethodParam`, `WrappedProperties`, `Chlorinator`, `IBuilder`, `DecoderFunction`, `CovidData`, `MaterialMap`, `Mocha.Test`, `AnyField`, `HookBase`, `DemoSettings`, `android.support.v7.widget.RecyclerView`, `NgbModal`, `SharedElementSceneData`, `NavigatorState`, `SnackbarContextInterface`, `Directus`, `IDifferences2`, `Op`, `TestTemplate`, `IContainer`, `WindowRect`, `AudioNode`, `JPABaseEmitter`, `Proof`, `UserManager`, `IWrappedEntity`, `UserMusicDifficultyStatus`, `MagickFormat`, `ZIlPayCore`, `VarUsages`, `ConnectedComponentClass`, `PanelsState`, `TheMovieDb`, `HttpErrorHandler`, `ComboFilter`, `TransactionsState`, `SlatePlugin`, `SerializableConstructor`, `StateProps`, `AxisProperties`, `__HttpResponse`, `IServiceConstructor`, `d.ComponentCompilerEvent`, `LocalStorageArea`, 
`DeploymentOptions`, `AppiumDriver`, `WordcloudViewModel`, `HarmajaOutput`, `OriginAccessIdentity`, `DeclarativeEnvironment`, `NumberListProps`, `SubtitlesCardBases`, `ItemStat`, `ExtraButtonComponent`, `SnackbarMessage`, `MaybeLazy`, `DynamoDB.UpdateItemInput`, `MediatorFactory`, `EmailVerificationToken`, `YallistNode`, `StaticCollider`, `RadarPoint`, `sdk.PullAudioInputStream`, `ImagePickerControllerDelegate`, `DeleteButtonProps`, `CloudWatch`, `SimpleExpression`, `Discord.Channel`, `Rank`, `NumberInputOptionProps`, `IDiagnosticsResults`, `FunctionAppContext`, `Zipper`, `UpdatableChannelDataStore`, `ILeaseState`, `IWaterfallTransaction`, `IExpectedArtifact`, `IExtentModel`, `lf.Predicate`, `ApiKey`, `ResponseInit`, `FlowBranchLabel`, `BundleModule`, `ListInstanceProfilesCommandInput`, `DefinitionFilter`, `THREE.WebGLRenderer`, `AgentQq`, `IRootPosition`, `ThemeOption`, `ExprDependencies`, `NamedDeclaration`, `CloudflareApi`, `FileRepositoryService`, `Pooling3DLayerArgs`, `BYOCLayer`, `CodeWriter`, `InternalUnitRuntimeContext`, `DeepImmutableObject`, `Resilience`, `KeyedSelectorFn`, `NavigationViewModel`, `ArchiveEntry`, `ListViewWrapper`, `SymbolSize`, `BezierCurve`, `CreateNetworkProfileCommandInput`, `anyNotSymbol`, `PowerShellScriptGenerator`, `DebugProtocol.Event`, `MessageMock`, `Decimal`, `ParameterDeclaration`, `SchemaMatchType`, `SavedObjectsIncrementCounterOptions`, `UUIDMetadataObject`, `ScriptThread`, `RegulationHistoryItem`, `DropInPresetBuilder`, `ValidateErrorEntity`, `ControlPanelState`, `Relationship`, `TaskContext`, `AxeResultsList`, `QueryEngineEvent`, `StartOptions`, `SvelteSnapshotManager`, `CharacterSetECI`, `TestClock`, `DeletePolicyVersionCommandInput`, `FileWithMetadata`, `TextTip`, `RBNFInst`, `AnimationKey`, `ComplexExpression`, `ValidationHandler`, `AthleteUpdateModel`, `RankingItem`, `ListProjectsCommand`, `MiddlewareCreator`, `CubeArea`, `StatusBar`, `SceneActuatorConfigurationCCGet`, `JPAResource`, `StyleSanitizeFn`, `ast.WhileNode`, `CaseBlock`, `PluginInitializerContext`, `SimulatedTransactionResponse`, `BuildMatch`, `StaffDetails`, `P2PMessagePacketBufferData`, `QuotaSettings`, `StrongExpectation`, `ParticipantsRemovedEvent`, `IndieDelegate`, `SavedObjectsClosePointInTimeOptions`, `d.CompilerSystem`, `ISdkBitrateFrame`, `FreeBalanceClass`, `DocProps`, `CardConfig`, `CreateRawTxOut`, `ElementPaint`, `DescribeImagesRequest`, `DomNode`, `TokenTransferPayload`, `d.ComponentCompilerStaticProperty`, `Knex`, `PDFFont`, `ArithmeticInput`, `Debugger`, `ReactEditor`, `EmberAnalysis`, `ShorthandRequestMatcher`, `PaginateResult`, `CommitOptions`, `HintManager`, `JestAssertionError`, `ParameterJoint`, `ObjectWithId`, `TabProps`, `FunctionTypeResult`, `LightBound`, `InterfaceWithEnumFromModule`, `IterationUse`, `EChartsOption`, `UnionOrIntersectionTypeNode`, `ExpressionAttributes`, `YamlMappingItem`, `CreateApiKeyCommandInput`, `LinesGeometry`, `OrgEntityPolicyOperations`, `UserEnvelope`, `MempoolTransaction`, `NgModuleType`, `requests.ListProtocolsRequest`, `HTMLIonModalElement`, `ReturnType`, `BeneficiaryDTO`, `FeatureFlagType`, `DependenceGroup`, `ZipOptions`, `ValidationFunction`, `SearchInput`, `AcceptCallbacks`, `d.Screenshot`, `ILineIndexWalker`, `StateObservable`, `QuerySnapshotCallback`, `ComponentPortal`, `LRUCache`, `CommentThread`, `WindowSize`, `CompressedJSON`, `IntegrationTypes`, `PlayerLadder`, `MessageExecutor`, `Robot`, `DlpServiceClient`, `ZoneManager`, `Red`, `Events.pointerenter`, `BlockFormatter`, `ErrnoException`, `ProfileProviderResponse`, 
`IndexedTrendResult`, `WalletDeploymentService`, `NonFungibleTokenAPI.Options`, `IJetURLChunk`, `NgIterable`, `RenderTarget_t`, `CircleBullet`, `InstructionWithTextAndHeader`, `ClassificationType`, `Buckets`, `ConflictException`, `DialogueTest`, `DataTable`, `Binary3x3x3Components`, `VdmEnumType`, `ElementAst`, `PartyJoinRequestList`, `TranslationFormat`, `SyncStatus`, `ICreateOptions`, `AddTagsToResourceMessage`, `cc.Event.EventKeyboard`, `GetRepositoryCommandInput`, `ZeroXPlaceTradeParams`, `InterfaceAlias`, `AliasName`, `StateNavigator`, `ValveState`, `PageSourceType`, `OAuthExtension`, `CartPage`, `CameraType`, `ButtonLabelIconProps`, `CalculationScenario`, `SFAMaterialBuilder`, `TokenStat`, `Pool.Options`, `RtcpPayloadSpecificFeedback`, `DisplayObject`, `ConfigurationPropertyDict`, `IResourceAnalysisResult`, `TooltipService`, `TItemsListWithActionsItem`, `LovelaceCardConfig`, `CallerIdentity`, `IOptimizelyAlphaBetaTest`, `ArrayLike`, `VFSEntry`, `ICompileService`, `ISubscriberJwt`, `AutoFilter`, `StampinoTemplate`, `SyncMember`, `ApprovalPolicyService`, `SmartHomeApp`, `ListFindingsRequest`, `Ink`, `IPayment`, `mendix.lib.MxObject`, `FormatterParam`, `JobTypes`, `ResolvedOptions`, `AccountSetOpts`, `VoiceChannel`, `SnapshotNode`, `AndroidConfig.Manifest.AndroidManifest`, `DefaultDataServiceConfig`, `WatchBranchEvent`, `PathOrFileDescriptor`, `CopySink`, `RuleMetadata`, `ReconciliationPath`, `AttributeDatabindingExpression`, `ApifySettings`, `SliderGLRenderer`, `LoggerOutput`, `CalcValue`, `DeploymentCenterFormData`, `GlobalTag`, `TemplateStringsArray`, `FoodRelation`, `SavedObjectDescriptor`, `FavoritePropertiesOrderInfo`, `PlanStep`, `Lead`, `ArrayRange`, `ApplicationEntry`, `VaultActive`, `IndexBuffer3D`, `ModelLayer`, `SpatialViewDefinitionProps`, `SourceLoc`, `EditorDescription`, `SessionsActions`, `WebLayer3DBase`, `TypedReflection`, `MultiKeyComparator`, `IAuthState`, `SchemaOverview`, `BaseService`, `EventsMessage`, `PiEditPropertyProjection`, `RequestContract`, `PendingAction`, `DelegatorReward`, `NoteService`, `ParameterOptions`, `ArrayBufferWalker`, `HeapInfo`, `NzMessageRef`, `AccountData`, `ValueSource`, `Serializer`, `ParticipantsJoinedListener`, `DynamicEntry`, `AirGapWallet`, `MangolLayer`, `IStaggerConfig`, `ISeed`, `HostCancellationToken`, `UrlGeneratorsStart`, `TileCoordinates`, `DeferredPromise`, `Client.ProposalResponse`, `ILoggerInstance`, `RootActionType`, `CoreUsageStats`, `GfxRenderPassDescriptor`, `IDomainEntry`, `GroupArraySort`, `Gatekeeper`, `ObjectList`, `TurndownService`, `FileDescription`, `ITreeData`, `VisContainerProps`, `BlogActions`, `ValidationOptions`, `ServiceIdentifier`, `CardsWrapper`, `PolyfaceVisitor`, `Conjugate`, `ContractWhiteList`, `Class`, `ProcessorInternal`, `IDynamicGrammarGeneric`, `SanityChecks`, `SpeakDelegate`, `IScore`, `FilesChange`, `PostsState`, `UsePaginatedQueryOptions`, `EVM`, `core.VirtualNetworkClient`, `ConstantArgs`, `ContactModel`, `LABEL_VISIBILITY`, `WetPlaceholder`, `IParseAttribute`, `FindByIdOptions`, `DeleteDistributionCommandInput`, `MBusTransaction`, `TableFormDateType`, `HammerLoader`, `Schema$Sheet`, `YawPitchRollAngles`, `SlashCommand`, `Express.NextFunction`, `TypeError`, `CrossConnectMapping`, `ProviderRpcError`, `T_0`, `JsonPath`, `RushConfiguration`, `GenericEvent`, `FiberNode`, `zmq.Pair`, `BlockchainTimeModel`, `TrackEventParams`, `GlobalSearchProviderResult`, `Eyeglasses`, `ColorPreviewProps`, `Fish`, `DiagnosticLevel`, `AssignedContentType`, `RowVM`, `CategoricalParameterRange`, 
`MethodDescriptorProto`, `HumidityControlMode`, `DefinitionRange`, `MagickReadSettings`, `GoalStatus`, `MaterialUiPickersDate`, `FB3ReaderPage.ReaderPage`, `AccessibilityOptions`, `EsErrors.ElasticsearchClientError`, `NextAuthOptions`, `MacroActionId`, `MailStatusDto`, `MdcSlider`, `Node_Enum`, `TestActionContext`, `TokenBucket`, `SkyBoxMaterial`, `ListTournamentRecordsAroundOwnerRequest`, `vscode.CompletionItemKind`, `CdkScrollable`, `DNSLabelCoder`, `SGDOptimizer`, `StringSet`, `ErrorEmbeddable`, `ObjectAssertionMember`, `IcuExpression`, `nsISupports`, `AutoCompleteEventData`, `ModalSize`, `NamedIdentityType`, `interfaces.Bind`, `ColumnDefinitions`, `EventNote`, `GameDataInterface`, `XMLElementOrXMLNode`, `ICloneableRepositoryListItem`, `CommandLine`, `SubscribeCommandInput`, `Polynomial`, `MultisigData`, `InputArgs`, `RequestTemplateReference`, `DurationInput`, `MangoClient`, `CameraRig`, `BooleanExpression`, `MarkdownTable`, `TemplateData`, `EnrichmentPipeline`, `DynamicCommandLineAction`, `ServiceDefinition`, `SwitchKeymapAction`, `CustomerContact`, `VisualEditor`, `EthAddress`, `AlertType`, `BindingAddress`, `HookFn`, `LinkProof`, `EventDestination`, `ITransformResult`, `RenderColorTexture`, `IBoxPlot`, `ColumnProp`, `AirlineService`, `FunctionField`, `SHA512`, `LevelUpChain`, `ButtonText`, `CurrencyMegaResult`, `ts.Decorator`, `WTCGLRenderingContext`, `SxSymbol`, `DelNode`, `SinonSpyCall`, `ActionType`, `ConfigurationManager`, `HSLVector`, `DiscordBot`, `CanvasKit`, `EditDashboardPage`, `SettingsValue`, `FreePoint`, `ManagementDashboardTileDetails`, `BackendError`, `BridgeConfig`, `TimeSeriesMetricDefinition`, `IVector4`, `HandshakeType`, `IVisualHost`, `FormInput`, `StackUtils`, `TypeReferenceSerializationKind`, `IHistorySettings`, `PlasmicLock`, `android.content.Context`, `FoodItem`, `IWorkflowBase`, `Escrow`, `Objects`, `ReleaseProps`, `vscUri.URI`, `Some`, `RootAction`, `ModalNavigationService`, `IAllTokenData`, `RenderTexture`, `FirebaseHostingSite`, `SharedAppState`, `GuildChannelResolvable`, `LiteralExpression`, `VantagePointInfo`, `IModelAnimation`, `PostCSSNode`, `UpdateRequestBuilder`, `GoThemeBackgroundCSS`, `ContextMenuDirection`, `JobRunSummary`, `ArgumentsType`, `StellarCreateTransactionOptions`, `TooltipModel`, `ReshapeLayerArgs`, `FindOneOptions`, `FlowTransform`, `FlexibleAlgSource`, `SignalState`, `GL2Facade`, `UniqueObject`, `AlertConfig`, `ListPager`, `WebhookSettings`, `IBaseRequestAction`, `IFileRange`, `FilterValue`, `chrome.contextMenus.OnClickData`, `SupportContact`, `IDrawData`, `IPeerLogic`, `ElementPropsWithElementRefAndRenderer`, `LineWidth`, `ConfigurationGroup`, `ExtraValues`, `GetRepositoryStatisticsPayload`, `CompositeMapper`, `IMapPin`, `SFieldProperties`, `CLICommand`, `SignCallback`, `IMesh`, `AltStore`, `MorphTarget`, `RequestInterceptor`, `IntType`, `MenuStateModel`, `JSONFormatter`, `OrmService`, `SessionConfiguration`, `CameraFrameListener`, `HALEndpointService`, `ValueMetadataDuration`, `RestoreWalletHandler`, `AuthAndExchangeTokens`, `GlyphData`, `TransitionController`, `HouseCard`, `RegionHash`, `AssetPropertyValue`, `PostProps`, `MarkupKind`, `UNK`, `BillDate`, `InstrumentName`, `EditStatus`, `QueryResultRow`, `DynamoDBStreamEvent`, `ButtonType.StyleAttributes`, `ICircuitState`, `SpekeKeyProvider`, `PlaceAnchor`, `FadingParameters`, `TokensList`, `GossipTimestampFilterMessage`, `mpapp.IPageProps`, `ClipPlane`, `NineZoneNestedStagePanelsManager`, `Principal`, `Highcharts.NetworkgraphLayout`, `VectorTransform`, `PackagerInfo`, 
`LabelDefinition`, `MDL0Model`, `SchemaCxt`, `BoardBuilder`, `SourceFileSystem`, `OutputProps`, `CustomerService`, `LanguageEntry`, `Push`, `DateParser`, `ViewModelQuery`, `yargs.CommandModule`, `RPCProtocol`, `Reverb`, `XroadIdentifier`, `PropertyDefinition`, `TickResultEnum`, `DetectionMetrics`, `Project.Root`, `GlobalConstraintRegistrarWrapper`, `SymbolIndex`, `DescribeRepositoriesCommandInput`, `React.ComponentType`, `Thunk`, `cloudwatch.MetricChange`, `RelationComparisonResult`, `AnnotationShape`, `CommandBuildElements`, `PubkeyResult`, `DescribeDBClusterParametersCommandInput`, `Json.Property`, `DataCardEffectPersonType`, `ActivityType`, `App.storage.ICachedSettings`, `thrift.TProtocol`, `MaterialAlertDialogBuilder`, `EntityCacheReducerFactory`, `ClusterContextNode`, `Types.Id`, `BeforeCaseContext`, `LabelOptions`, `StopTransformsRequestSchema`, `TypeOrUndefined`, `WebGLResourceHandle`, `ImportSpecifierArray`, `SocketMessages.produceNum`, `Id64Arg`, `SCSSParser`, `OperationResponse`, `FormlyTemplateOptions`, `ISuiteResult`, `PDFDropdown`, `IOAuthTokenResponse`, `PipelineRuntimeContext`, `IOSSplashResourceConfig`, `Decorator`, `DictionaryModel`, `ImportedRecord`, `ChangeVisitor`, `TransposeAttrs`, `GetAccessorDeclaration`, `... 15 more ...`, `SignatureHelpContext`, `AttandanceDetail`, `DataTypeResolver`, `LoadConfigResults`, `ViewManager`, `AsyncSubject`, `LexPosition`, `requests.ListSessionsRequest`, `Annotation`, `MeshComponent`, `ExportDeclaration`, `BinarySensorCCGet`, `FederationClient`, `CodeEdit`, `EmptyActionCreator`, `FirestorePluginOptions`, `TypeConfig`, `estypes.ErrorCause`, `PropTypesMapping`, `GetOpts`, `ActivityTypes`, `TextState`, `HandlerAction`, `UpdateChannelRequest`, `ChannelModel`, `UndoStack`, `HotkeyConfig`, `AppRegistryInfo`, `SubscriptionService`, `OptionGroup`, `BatteryStateEntity`, `Web`, `MediaTags`, `EstimateGasValidator`, `VueTag`, `IGLTFLoaderExtension`, `IWarningCollector`, `PathBuilder`, `IRemix`, `ProgressionAtDayRow`, `RollupCache`, `PriceScale`, `TableModelInterface`, `EnhancedItem`, `SignedTokenTransferOptions`, `CellGroup`, `LabelChanges`, `ProjectedXY`, `Identifiers`, `NBTPrototype`, `HomePluginStartDependencies`, `AstPath`, `TextSelection`, `ImGui.Style`, `SpendingConditionOpts`, `SQLNode`, `ScraperArgs`, `cytoscape.CollectionElements`, `ListAppInstanceUsersCommandInput`, `CompletionItem`, `Mounter`, `GetResourcesCommandInput`, `PainlessCompletionResult`, `ScheduleItem`, `IOrderResult`, `CreateServerCommandInput`, `HealthType`, `DetailViewData`, `ListSecretsRequest`, `MutationFunc`, `RedisAdapter`, `MutationTypes`, `IExecuteFunctions`, `TsxComponent`, `AlphaTest`, `CompareMessage`, `ContainerRepository`, `NefFile`, `BracketTrait`, `VideoPreferences`, `HtmlOptions`, `ProductInformation`, `DeleteMemberCommandInput`, `RemoveOptions`, `TimelineBuckets`, `ModernServerContext`, `DeviceDescriptor`, `BeneficiaryApplication`, `ObstaclePortEntrance`, `EvaluatorUsage`, `MoveType`, `IColorHierarchy`, `NatGateway`, `MongoCron`, `CharData`, `Profiler`, `CertificateResponse`, `FieldMetadata`, `TestHookArgs`, `requests.CreateConnectionRequest`, `GeistUIThemes`, `CommandValues`, `MutableList`, `BaseUAObject`, `TResponse`, `SortField`, `AnimatedValue`, `CompressionTextureTypeEnum`, `ApiPipeline`, `CardRenderDynamicVictoryPoints`, `FloatValue`, `InterpolateExpr`, `AzureWizardPromptStep`, `PersonStatusType`, `Queryable`, `AdmZip`, `EventActionHandlerActionCallableResponse`, `SubqueryProject`, `InjectionKey`, `ValueTypeOfParameter`, `CompletionTriggerKind`, 
`AnyPatternProperty`, `PaletteThemeConfig`, `DraggableList`, `ParsedProperty`, `CharLevelState`, `Events.precollision`, `EntitySchemaService`, `FileEntry`, `CompiledCard`, `TelemetrySavedObject`, `Nexus`, `ENABLED_STATUS`, `CursorBuffer`, `ScreenEventType`, `ExpenseService`, `CellClassParams`, `DeSerializers`, `StoreChangeEvent`, `TorusStorageLayerAPIParams`, `RelatedRecords`, `BaseAxisProps`, `CategoryDataStub`, `Defunder`, `WorkspaceFolder`, `FrameworkEnum`, `requests.ListKeyStoresRequest`, `StateManagerImpl`, `RegularStep`, `LuaState`, `CrochetCommand`, `AnyRegion`, `ListSnapshotBlocksCommandInput`, `MerchantGameActivityEntity`, `SettingsModel`, `FuzzyScore`, `LayoutDto`, `IArtist`, `BufferLines`, `PartytownWebWorker`, `MyAudioContext`, `ApiConfig`, `GraphPath`, `SourceRenderContext`, `FabricGatewayRegistry`, `JsxSpreadAttribute`, `IUpworkApiConfig`, `Mutator`, `HdPublicNode`, `IOfflineData`, `TemplateFile`, `IWrappedExecutionContext`, `JointOptions`, `RookCephInputs`, `CfnParameter`, `SharePluginStart`, `SyncArgs`, `RSTPreview`, `StackScreenProps`, `VpcConfiguration`, `RadioProps`, `ProcessListener`, `TransportResponse`, `ITableData`, `ReStruct`, `GridReadyEvent`, `ResourceChange`, `SaveFileWriter`, `TestNode`, `PluginsSetup`, `TexFunc`, `ViewFilesLayout`, `DefaultAzureCredential`, `ItemDataService`, `TimefilterSetup`, `ICommandOptionDefinition`, `DatModelItem`, `d.SourceMap`, `LocalizedLabels`, `ParsedMessagePartICUMessageRef`, `CurriedGetDefaultMiddleware`, `TweetMedia`, `FooBar`, `IListener`, `UnionOptions`, `DAL.DEVICE_ID_SYSTEM_TIMER`, `InputHTMLAttributes`, `BarRectangleItem`, `ExtractedAttr`, `SankeyPoint`, `EllipseEditUpdate`, `TidalExpression`, `IProjectData`, `DefaultDataServiceFactory`, `CampaignsModelExt`, `ConceptInstance`, `DerivedKeys`, `Transformer`, `ServiceGetPropertiesResponse`, `DashboardConfig`, `ResourceActionMap`, `CreateRoomRequest`, `SimplifiedType`, `QuantumMove`, `Container`, `PortMapping`, `AnnotatedError`, `SqlTuningTaskCredentialTypes`, `core.ApiRequest`, `ParticipantsLeftListener`, `ProjectStorage`, `EngineResults.ListMigrationDirectoriesOutput`, `LayerConfig`, `ExtractCSTWithSTN`, `vscode.Memento`, `WebClient`, `PrimitiveModeEnum`, `TerminalOptions`, `IToolchian`, `CONTENT`, `ExceptionBreakpoint`, `ErrorsByEcoSystem`, `ListAppsCommandInput`, `DataSetupDependencies`, `ExpBoolSymbol`, `React.FunctionComponent`, `LayoutManager`, `DescribeVpcPeeringConnectionsCommandInput`, `VueConstructor`, `TImageType`, `requests.ListOceInstancesRequest`, `ListJobsCommand`, `ResultInterface`, `DependencyOptions`, `BoundingBox`, `IndexPatternDeps`, `DocViewInput`, `Convolver`, `GeometricElement`, `PageQueryOptions`, `IParam`, `Suggest`, `GfxDeviceLimits`, `AuthenticateDeviceRequest`, `RpcConnectionWriter`, `AwrDbCpuUsageSummary`, `ItemPriceRate`, `BigInt`, `AcctStoreDict`, `bitcoin.Psbt`, `Arweave`, `IAjaxSettings`, `Matrix4x4`, `SerializedConcreteTaskInstance`, `btCollisionShape`, `ArrayProps`, `ODataFunctionResource`, `IGetExportConfigsResponse`, `IModelType`, `ZWaveError`, `IconConfig`, `Rebind`, `RunningGameInfo`, `IApiTag`, `EllipticPair`, `SurveyResultModel`, `GetMembersCommand`, `V1Service`, `XmlNamespacesCommandInput`, `K3`, `AccountFilterParams`, `ZosJobsProvider`, `ICombo`, `B11`, `OpenApiOperation`, `SkinnedMesh`, `IsSkipFeature`, `ToolChoice`, `GetStaticPaths`, `CreateCommentDto`, `ServiceMonitoringServiceClient`, `UnicodeSurrogateRangeTable`, `AnyFn`, `BlockInfo`, `Web3.CallData`, `ioBroker.Object`, `RefLineMeta`, `ITemplateMagic`, `AssignmentStatus`, 
`StubHelper`, `Highcharts.MapLatLonObject`, `ContentLoader`, `MappableType`, `SuiComponentFactory`, `HTTPMethod`, `FormContextValue`, `PopStateEvent`, `PluginWriteActionPayload`, `TextDocumentContentChangeEvent`, `PluginsServiceSetupDeps`, `NumRange`, `SizeNumberContext`, `MonitorRuleDef`, `ClockFake`, `ex.ExcaliburGraphicsContext`, `IndexRangeCandidate`, `ListVodSourcesCommandInput`, `PopperOptions`, `GetServerSideProps`, `BuildEdgeStyle`, `ChoicesType`, `AuditLog`, `ResolvedLibrary`, `CounterST`, `ReadyPromise`, `PaintServer`, `ANSITerminalStyleRenderer`, `BeanDefinition`, `ScopeSelector`, `ContractAbstraction`, `React.Props`, `InboundMessage`, `MongoManager`, `RawBlockHeader`, `TagExpr`, `Geom.Rect`, `HomeAssistant`, `ImageMimeType`, `PureComponent`, `MetaType`, `IToast`, `IconService`, `tcp.Connection`, `AddToLibraryAction`, `Definition`, `RTCRtpReceiver`, `PropertyNode`, `CustomFormControl`, `TRK1AnimationEntry`, `PluginWrapper`, `mapProperties`, `MangaDetailsFields`, `LogValueArgs`, `ParsedLocator`, `TdpClient`, `ToolAttr`, `ReportData`, `Dir`, `AssetModule`, `ChromeConnection`, `DMMF.ArgType`, `postcss.Rule`, `ExportData2DArray`, `WebElement`, `ResolvedSimpleSavedObject`, `hapi.Request`, `JobStatusResult`, `FileManager`, `RuleResult`, `OneNotePage`, `DidSaveTextDocumentParams`, `InstanceMember`, `PR`, `HsCommonLaymanService`, `RtcpSrPacket`, `IndexSymbolData`, `GeoVector`, `tl.FindOptions`, `AmbientLight`, `Work_Client.WorkHttpClient2_1`, `BrowseCloudDocument`, `apid.EncodeId`, `ClassThatUseDifferentCreateMock`, `Attribution`, `SequelizeModuleOptions`, `AbstractControl`, `OptionName`, `WheelEventState`, `Moltin`, `ListUnspentOptions`, `BulkActionProps`, `AutorestDiagnostic`, `IStdDevAggConfig`, `DataSourceSpec`, `IDynamicOptions`, `MDCRippleFoundation`, `TModel`, `Pitch`, `ManifestData`, `keyboardJS.KeyEvent`, `BrowserWindowConstructorOptions`, `DragEventHandler`, `SnapshotPublicData`, `TaxonomicFilterGroup`, `YellowPepperService`, `PDFWidgetAnnotation`, `BatchCertificateClaim`, `ShapeField`, `ICommandItem`, `GraphQLTaggedNode`, `IDejaDragEvent`, `ITourStep`, `InvoicePromo`, `SubjectDataSetFilter`, `IGitExecutionOptions`, `TransactionWithStatus`, `NetworkSettings`, `CountdownEvent`, `BlockData`, `PresentationManager`, `ParjsCombinator`, `TestData`, `JustifyContent`, `MapLayersService`, `BridgeToken`, `ColumnConfiguration`, `AssociatePackageCommandInput`, `FsFolder`, `mjAlerts`, `ArrayList`, `ChartParameters`, `requests.ListHttpMonitorsRequest`, `CssParser`, `capnp.Orphan`, `ISelectProps`, `ListContactsCommandInput`, `LoopAction`, `ElemAttr`, `jasmine.SpyObj`, `TemplateNode`, `ScannedDocument`, `IGetTimeLogInput`, `CLINetworkAdapter`, `ConfiguredPluginsClient`, `FrameParser`, `RX.Types.DragEvent`, `ObjectMakr`, `EC`, `RPC.KVClient`, `RefForwardingComponent`, `RuleTypeRegistry`, `ts.Block`, `ReadModelRegistry`, `MockAirlineService`, `ConnectState`, `StateChannelExitClaim`, `SimpleASTNode`, `RollupTransaction`, `EmbeddableEditorState`, `CSSResultGroup`, `Ulonglong_numberContext`, `CanvasThemePalette`, `Http3RequestMetadata`, `ICoords`, `SFUISchema`, `DestinationJson`, `TestReader`, `ExpirationDateVerification`, `ReadyType`, `EntityField`, `IndexedClassRewrite`, `InputFieldDefinition`, `ModelOptions`, `Filter`, `Bbox`, `TSelected`, `ColumnChunk`, `PostFrameUpdateType`, `FocusEventHandler`, `ResourceDayHeaderWrapper`, `IDeliveryNetworkResponse`, `ITitusServerGroupCommand`, `RBNFSymbols`, `ListPingProbeResultsRequest`, `GeneratorState`, `ts.LanguageServiceHost`, `MethodNames`, 
`HdDogePaymentsConfig`, `NotifyArgs`, `IDireflowConfig`, `NgrxJsonApiZone`, `CrochetCommandBranch`, `DirectoryIndexOptions`, `CreatePipelineCommandInput`, `ITransactionIdentifier`, `TinyColor`, `VoidAnyEvent`, `AlainDateRangePickerShortcutItem`, `Hentai`, `GraphQLRequestContext`, `ShareAdditionContent`, `SdkRemoteParticipant`, `QueueSSEService`, `MikroORM`, `ReportFilter`, `WorkspaceExtImpl`, `Submit`, `BookmarkMetadata`, `SetVaultParameter`, `CollisionShape`, `angular.IScope`, `ProcessDataService`, `MentionData`, `serviceDefinition`, `FunctionWithKey`, `UrlForwardingPlugin`, `WebContents`, `HealthCheckResult`, `QueryBus`, `UserMentionEntity`, `OrderStatusReport`, `MagentoProduct`, `EventInfo`, `ChecklistTask`, `JobName`, `IChangeDiscussionItem`, `ReadStorageObjectsRequest`, `UpSetAddons`, `ContractManifest`, `ArrayComparator`, `GenesisProtocolProposal`, `AssociationLifecycleState`, `SavedObjectManagementTypeInfo`, `OmvGeometryType`, `Vector2Like`, `EIP712TypedData`, `ExportFormat`, `OperatorValueFilterDescriptor`, `TxtParentNode`, `CourseActions`, `CountModel`, `MomentDateAdapter`, `LogItemProps`, `GraphQLRequestConfig`, `ParsedExampleTree`, `msRest.Mapper`, `BaseEncryptedPacket`, `DefaultItemType`, `MockMessage`, `ChangeProjectCompartmentDetails`, `InMemoryFileSystem`, `MidiDevice`, `MMOnlineStorage`, `BazelBuildEvent`, `PartialResolvedId`, `StateChannelJSON`, `XPCOMObserverTopic`, `ImageProvider`, `SapphireDbService`, `IMessageFromBackground`, `SharedTreeSummaryBase`, `Hono`, `StopFlowCommandInput`, `DsnComponents`, `DataFactoryClient`, `d.PrerenderStartOptions`, `TypedMessageRendererProps`, `model.domain.DomainElement`, `WebSocketServer`, `ComponentOrTag`, `LoDashStatic`, `Plugin`, `FunctionMap`, `SpeechRecognitionEventArgs`, `FrameResult`, `QueryMiddlewareParams`, `BuildImpl`, `TypedMap`, `WikiPage`, `OhbugMetaData`, `GetterTree`, `HtmlNode`, `GraphQLServer`, `Strip`, `DataLimit`, `GenericTwoValues`, `LRUItem`, `LatLngExpression`, `ErrorListener`, `FailureInfo`, `GluegunAskResponse`, `BodyPixOperatipnParams`, `CollectionDataService`, `EnvironmentVariable`, `DevtoolsPluginApi`, `ReleaseOptions`, `ConfirmationDialogService`, `TodoItemEntity`, `IFormControlProps`, `TopMiddleBottomBaseline`, `LogBuilder`, `Rpc`, `AggregatedResult`, `MetricServiceClient`, `LGraphNode`, `RadarrSettings`, `TreeSeriesNodeItemOption`, `DeviceAccess`, `IndexAliasData`, `jsPDFDocument`, `TapGesture`, `IPlayer`, `ProjectId`, `DatedAthleteSettingsModel`, `ModuleConfiguration`, `RemoteService`, `TwingTokenStream`, `ExprWithParenthesesContext`, `BitbucketAuthTokenRepository`, `SobjectResult`, `THREE.Light`, `PaginationResult`, `CommonPrefix`, `ApiResult`, `TService`, `MacroKey`, `PolyDrawing`, `KillRingEntity`, `FlagsT`, `PDFAcroRadioButton`, `PropTypes`, `PatternLibrary`, `IEditorStore`, `IMacroBuffer`, `CodeLensParams`, `LineMetrics`, `RetryConfigState`, `SchemeObject`, `InlineControl`, `GraphQLCompositeType`, `ParserOptions`, `DissociatePackageCommandInput`, `OperationRequestDetails`, `MovementType`, `FeedPost`, `MultiChannelAssociationCCSet`, `MockErc20Token`, `AccountingTemplateService`, `IBuffer`, `DeferredDefinition`, `DatabaseStatus`, `ApiOperation`, `IErrorInfo`, `PropTypeFinder`, `IProtocolConstructor`, `Bodybuilder`, `DefaultOptionType`, `IMeetingRepo`, `StateFor`, `TestEnvironment`, `d.JsonDocsEvent`, `SymbolSet`, `AppStore`, `CalendarCell`, `PortablePath`, `XMLBuilderContext`, `WorkingHour`, `VisualizeEmbeddableConfiguration`, `TouchingElementInfo`, `OrganizationPoliciesConfig`, 
`StringEncodedNumeralFormat`, `FunctionDefinitionNode`, `IProposal`, `FieldsConfig`, `SpeakersState`, `HdDogePayments`, `ApplicationContainerState`, `PageScrollService`, `StrokeProtocol`, `DefaultKernel`, `JsonVisitor`, `numericRootOfPolynomial`, `TaskCallback`, `ChannelItem`, `V`, `ChooseImageSuccessCallbackResult`, `execa.Options`, `EventKind`, `DeployHelper`, `DFS_Config`, `UserFormValues`, `JWK.Key`, `Walker`, `ImportBlock`, `TupleData`, `CombatStateRecord`, `PythonPathResult`, `LRParser`, `ShapeT`, `CpuUsage`, `DeleteChannelBanCommandInput`, `Bip32Options`, `ServerlessAzureConfig`, `AccessorEntry`, `JiraColumn`, `MessageConfig`, `SendTx`, `IncrementDirection`, `MetricsPublisher`, `DeployedPlugin`, `CalculatePvService`, `Union`, `DecodedAddress`, `WritableStreamDefaultWriter`, `IntervalSet`, `MultProof`, `EmbeddableStart`, `FunctionalUtilities`, `IRowAPI`, `PlexMetadata`, `MockSelector`, `ComboBoxGroupedOption`, `InanoSQLTable`, `UpdateLongTermRetentionBackupParameters`, `Articulations`, `ECR`, `AnyId`, `ProjectInitializerConfig`, `requests.ListUserGroupMembershipsRequest`, `HttpPrefixHeadersCommandInput`, `OpticsContext`, `fun`, `StandardContentToolsProvider`, `JoinOptions`, `MrujsPluginInterface`, `MetaProps`, `PasswordHistoryView`, `IGceHealthCheck`, `fs.PathLike`, `WinstonLogger`, `UpdateProjectDto`, `FeedbinConfigs`, `SVString`, `ModelObject`, `ResourceKind`, `VirgilPublicKey`, `Survey.JsonObjectProperty`, `TutorialSchema`, `HelpError`, `CalloutArrow`, `VpcContextQuery`, `IBinaryDataConfig`, `ts.ExportAssignment`, `ConstantState`, `K.ExpressionKind`, `PointCloudOctree`, `WrapperLayerArgs`, `WaveformItem`, `InferGetStaticPropsType`, `DiffResultMessage`, `ConfigMetaFormat`, `CocSnippetPlaceholder`, `DescribeOrderableDBInstanceOptionsCommandInput`, `HelpCenterArticleService`, `BoolValue`, `LuaFiledCompletionInfo`, `PutConfigurationSetReputationOptionsCommandInput`, `CHR0_NodeData`, `SubschemaArgs`, `TSESTree.CallExpression`, `SelectionRangeParams`, `WexBimProduct`, `QueueNode`, `StorageFieldItem`, `NpmPackageManipulator`, `BinaryPaths`, `CreateGlobalClusterCommandInput`, `LanguageModelCache`, `LightGroupCircuit`, `GetUpdateConfigParams`, `RollupOptions`, `MockSocket`, `TreeCheckboxStateChangeEventArgs`, `ContextMenuRenderer`, `LaunchOption`, `DrawerHelperOptions`, `LatLngBounds`, `MsgRevokeCertificate`, `TimeRanges`, `InputAndOutputWithHeadersCommandInput`, `IHeaderState`, `ScalarTypeDefinitionNode`, `Collateral`, `i18n`, `ReadTransaction`, `TypeType`, `RangeSliderProps`, `IRuleConfig`, `StateMachine.State`, `ProfileStates`, `DrawerProps`, `SavedObjectsOpenPointInTimeOptions`, `VocabularyCategory`, `FormArrayState`, `ResponseReceivedEvent`, `QuadrantType`, `THREE.Mesh`, `RegistryDocument`, `PostgresClient`, `DescribeCertificatesCommandInput`, `ListSchema`, `IRemoteRoom`, `FabFilesObject`, `WithGenericsSub`, `BaseRenderer`, `RenderFlex`, `PmsiListType`, `InterfaceTypeWithDeclaredMembers`, `Phaser.Input.Pointer`, `ServerTreeItemPageObject`, `BSQRegex`, `TemplateWrapped`, `OperationStack`, `IAnyExpectation`, `BinaryStream`, `Node.Event`, `google.maps.MouseEvent`, `MigrateResolve`, `YfmToc`, `TrackedHasuraEventHandlerConfig`, `VulnerabilityAssessmentName`, `NodeGroup`, `FACE`, `Prefixer`, `Denomination`, `IFile`, `NextCommandOptions`, `PouchFactory`, `IArea`, `SeriesDataSortingOptions`, `StmtDiff`, `SoundService`, `ChainType`, `EventBus`, `AnimationInfo`, `Hooker`, `ScryptParams`, `Tenant`, `STComponent`, `ServiceEndpointPolicyDefinition`, `EmbeddableStartDependencies`, `ArticleList`, 
`TokenAmount`, `ManagementDashboardSummary`, `ObjectProperty`, `PipelineStatus`, `InternalApplicationSetup`, `PerformanceObserver`, `StateDecorator`, `TypeCheck`, `HapiResponseAdapter`, `DelayFunction`, `DynamicRepository`, `QueryExpressionBodyContext`, `ClrQuickListValue`, `LookupDescriptor`, `NamedMatchMediaProps`, `XmlComponent`, `PrunerPiece`, `Luna`, `IOrchestratorState`, `MangolState`, `ConfigurationPropertyValue`, `IAnalyticsService`, `Boost`, `Discord.Message`, `ComponentConfiguration`, `Remirror.CommandDecoratorOptions`, `MockResponse`, `TransactionInput`, `IDiagnosticsRow`, `Errorable`, `TimestampFormatHeadersCommandInput`, `ReadonlyVec4`, `CompositionTypeEnum`, `PossiblyAsyncOrderedHierarchyIterable`, `d.CssToEsmImportData`, `ImageFiltering`, `TasksStore`, `PhantomWallet`, `ModifyDBSubnetGroupCommandInput`, `VirtualDirectory`, `ChannelPermissionOverwrite`, `NamedArrayBufferSlice`, `d.TestingConfig`, `HttpsAgent`, `IMergeFile`, `BackgroundReplacementVideoFrameProcessorObserver`, `CreateRegexPatternSetCommandInput`, `ExpectedResponse`, `NodeRange`, `RequireStatementContext`, `SetupFunc`, `OnFailure`, `TouchEventHandlerType`, `DBType`, `GraphQLArgument`, `Chart.CallbackFunction`, `SmallLicense`, `NormalMod`, `requests.ListTopicsRequest`, `AuthCore`, `BaseAppearanceService`, `NumberInput`, `ITelemetryErrorEvent`, `ProjectRisk`, `DeployedWallet`, `Completion`, `Apply3`, `I18NService`, `AuthenticationProvider`, `FilterCondition`, `PushResponse`, `ISearchStart`, `FnU3`, `InsertionType`, `GridPattern`, `Study`, `EngineArgs.MarkMigrationRolledBackInput`, `HandleElement`, `SegmentRange`, `BorderStyleProps`, `CoreEditor`, `S`, `BasicObstacleSide`, `MetadataPackage`, `VtxLoaderDesc`, `EntryObject`, `CollisionDirector`, `RawCard`, `GetInviteCommand`, `fs.Stats`, `BackwardScanner`, `CipherAlgorithm`, `SlideUIEvent`, `IHttpService`, `LocaleMap`, `StockData`, `HaveIBeenPwnedApiResponse`, `RoadmapType`, `ClientItemViewModel`, `instantiation.IConstructorSignature4`, `Quote`, `ts.TypeChecker`, `IFiles`, `CellInfo`, `BlockAction`, `BezierCurve3d`, `LaunchTemplateSpecification`, `CreateAppOptions`, `IMetricListener`, `Loader`, `ReserveData`, `TimelineRowStyle`, `Actor`, `ListNamespacesCommandInput`, `ActiveSpeakerPolicy`, `CSSDesignToken`, `GenericStatusModel`, `BandFillColorAccessorInput`, `AppCommitment`, `FetchMock`, `SectionProps`, `UserGroupList`, `ExpandPanelAction`, `ResponseStatus`, `UpdatePhotoDto`, `IContentSearchResponse`, `JudgeClientEntity`, `RestModelEntry`, `ViewElement`, `MathViewProps`, `AgentConfigOptions`, `ITextModel`, `Preflight`, `RouteRecognizer`, `StynRule`, `StrokeCountMap`, `PageTitleService`, `EntryType`, `CursorConnectionType`, `GetMemberCommandInput`, `ListModelsCommandInput`, `MigrationStates`, `RouterSpec`, `InitializeMiddleware`, `TSerDeOptions`, `XmlBlobsCommandInput`, `LgQuery`, `dStage_stageDt_c`, `ErrorReporterConstructorContract`, `SaplingNativePlugin`, `Handshake`, `FabricEnvironmentRegistry`, `tags.Table`, `Cohort`, `TemplateManifest`, `IRecordedDB`, `ThemeSettings`, `DeleteAppInstanceAdminCommandInput`, `DMMF.SchemaArg`, `HomeOpenSearchDashboardsServices`, `ComponentMeta`, `TokenPair`, `ReferenceDirection`, `MatchedMiddleware`, `requests.ListDbHomePatchesRequest`, `ITreeNode`, `SVGSVGElement`, `ChartActionContext`, `PolicyDocument`, `PadModel`, `DisplayValueSpec`, `requests.ListVmClusterUpdateHistoryEntriesRequest`, `StoreGroupLike`, `preference.Set`, `MarvinImage`, `NgAddOptions`, `ReadableByteStreamController`, `DevtoolsInspectorProps`, `DevtoolsBackend`, 
`TriggerId`, `Tab`, `d.ModeStyles`, `RelativeBandsPadding`, `AdapterFindOptions`, `requests.ListAutonomousExadataInfrastructuresRequest`, `MDCTabDimensions`, `AnalyzedStyle`, `KeyStrokeOptions`, `ShowOptions`, `d.OutputTargetCopy`, `ThyScrollService`, `ISearchEventDataTemplate`, `AppContextService`, `TagResourceCommandInput`, `AnnotationLayer`, `WidgetRegistry`, `ShareContextMenuPanelItem`, `LetAst`, `Studio.App`, `ExclamationToken`, `LibrarySeriesSeasonEpisode`, `EthereumLedger`, `Box2`, `ViewerParameters`, `DefinitionParams`, `CollisionGroup`, `TagsBase`, `EC2`, `ItemList`, `DoorLockCCConfigurationReport`, `TRuleResolver`, `ThyFullscreenRef`, `TranslationDictionary`, `TagListMessage`, `FilterFn`, `ModItem`, `IResultSetUpdate`, `FileChange`, `Assert`, `MethodMaterial`, `DBClient`, `Iterate`, `DeviceManagerClient`, `IncrementalNode`, `MoveOptions`, `ListPackagesRequest`, `Auditor`, `GraphQLRequestContextWillSendResponse`, `FullPageScreenshotDataOptions`, `AppLogger`, `TracerConfig`, `TextBuffer`, `AsBodilessImage`, `AppComponent`, `SharedValue`, `ResourceLines`, `Boss`, `TypeSystemPropertyName`, `ConnectionOptions`, `TestAwsKmsMrkAwareSymmetricKeyring`, `UrlLoader`, `IMrepoDigestConfigFile`, `React.DetailedHTMLProps`, `moneyMarket.overseer.CollateralsResponse`, `EntityMetaData`, `Applicative`, `LazyResult`, `PropertyDeclaration`, `TaggedNumericData`, `Sheets`, `IContainerContext`, `ServiceConfigs`, `IDestination`, `FileBuffer`, `ListDomainDeliverabilityCampaignsCommandInput`, `DispatchFunc`, `parse5.Element`, `ExpressionServiceParams`, `FounderConfig`, `ICompanionElement`, `SpotifyApi.CurrentUsersProfileResponse`, `BaseFee`, `StitchesProps`, `RetryPolicy`, `Classification`, `SessionPort`, `DescribeDBClusterEndpointsCommandInput`, `Alert`, `Pojo`, `VersionRange`, `InternalRequestParams`, `ClientRequestResult`, `ErrorRes`, `DropTargetOptions`, `Complex`, `ServeD`, `ContractDecoratorKind`, `OnUpdate`, `common.Keybinding`, `Department`, `FieldResolver`, `OsdServer`, `ContentService`, `ExtendedCanvasRenderingContext2D`, `S2CellType`, `ZoneInfo`, `IKeyboardDefinitionAuthorType`, `NewsItemModel`, `FileChunkIteratorOptions`, `GitCommitLine`, `AssetID`, `tabBrowser`, `TronSignedTransaction`, `DayStressModel`, `BMDObjectRenderer`, `NodeSubType`, `RpcMessage`, `DevcenterService`, `FeatureOptions`, `ResponseIssue`, `WebElementPromise`, `EChartsType`, `EpochTracker`, `ContentControl`, `Battle`, `ModifyDBClusterParameterGroupCommandInput`, `TitleVisibility`, `UserIdentity`, `ts.TaggedTemplateExpression`, `AvailabilityDomain`, `H5GroveEntityResponse`, `EthereumTransactionOptions`, `jasmine.CustomMatcher`, `DeepMapResult`, `Create`, `DaffCompositeProductItemOption`, `... 
12 more ...`, `NSAttributedString`, `RemoteVideoStreamState`, `UpdateApplicationCommandInput`, `CompositeReport`, `SnippetsMap`, `CreateContextOptions`, `CodeGenFieldConnection`, `IAzExtOutputChannel`, `BarGeometry`, `requests.ListExportsRequest`, `kbnTestServer.TestElasticsearchUtils`, `EventType.onInit`, `ChainTransaction`, `SeparableConvParams`, `MatchProps`, `ModelsTreeNodeType`, `SubscriberRepository`, `TMouseEventOnButton`, `Socket`, `NonTerminal`, `Storage`, `FocusTrapInertStrategy`, `d.E2EProcessEnv`, `AnyQuery`, `PropertyContext`, `IUnlisten`, `Key3`, `OneIncomingExpectationRepository`, `CollectMultiNamespaceReferencesParams`, `MemberService`, `DownloadStreamControls`, `GitlabAuthResponse`, `DescribeConfigurationRevisionCommandInput`, `IDimensions`, `GraphicsItem`, `Times`, `ServiceOptions`, `Value2D`, `UI5Class`, `HTMLAttributes`, `INormalAction`, `CommonTokenStream`, `Http3QPackDecoder`, `MimeBuffer`, `ShaderityObject`, `JSXAnalysis`, `CompletionsCollector`, `UpdateGroupCommandInput`, `PitchShifter`, `NzModalService`, `BlockDisk`, `NavigationGuard`, `NodeData`, `TargetTypeMetadata`, `FileCommitDetails`, `PlotLineOptions`, `ElementFlags`, `BuildArtifacts`, `aws.s3.Bucket`, `InternalStack`, `OnTabReselectedListener`, `FsWriteResults`, `FilterCategory`, `TextEditorElement`, `TaskInfoExtended`, `ODataPropertyResource`, `EmissionMaterial`, `Platform`, `SingleObjectWritableStream`, `BoundingRect`, `ActorAnimKeeperInfo`, `OAuth2Service`, `T.ID`, `HTMLTemplateElement`, `WidgetView.IInitializeParameters`, `ModeRegistration`, `StatementContext`, `ArticleItem`, `FrameRateData`, `filterSymbols`, `PropertyAccessExpression`, `SCNNode`, `NgModuleTransitiveScopes`, `RpcResponseAndContext`, `RetryConfig`, `LikeNotification`, `CacheListener`, `BearerTokenResponse`, `OrderedId64Iterable`, `SetValueOptions`, `UpdateGlobalSettingsCommandInput`, `Motion`, `EnumTypeDefinitionNode`, `QueryState`, `IEventStoreData`, `ManifestInfo`, `INodeDef`, `TradeResponse`, `IndyWallet`, `MsgType`, `TemplateValidatorOptions`, `SubStmt`, `SavedObjectDashboard`, `MiscellaneousField`, `IArrivalTimeByTransfers`, `PageInfo`, `PadchatRpcRequest`, `TypeSystemEntity`, `ActionImpl`, `FrameTree`, `Shading`, `solG2`, `requests.ListVaultsRequest`, `requests.ListNatGatewaysRequest`, `T18`, `IGherkinDocument`, `CmdletParameters`, `ExitStatus`, `CodeExecutionEmitter`, `DeliveryOptions`, `SelectionModel.ClearMode`, `FileSystem`, `IStaticMeshComponentState`, `EditState`, `LogicalElement`, `com.google.firebase.firestore.Query`, `ReadFn`, `FullNote`, `PointerDragEvent`, `MoonbeamCall`, `DescribeIndexCommandInput`, `StatusChartStatusMesh`, `Resort`, `ApolloQueryResult`, `LineChartProps`, `UnitValue`, `QuerySubState`, `LoggingOptions`, `ITelemetryData`, `DaffCartLoading`, `ServerConfiguration`, `CompilerEventBuildStart`, `GX.KonstColorSel`, `PropagationResults`, `UseStylesProps`, `SuperExpression`, `ServerAccessKeyRepository`, `CreateDBParameterGroupCommandInput`, `SharedContentInfo`, `LeafletMouseEvent`, `Cluster`, `ViewPortItem`, `NodeCryptoCreateCipher`, `requests.UpdateJobRequest`, `RLANAnimation`, `WorkItemQuery`, `NavigateToPath`, `PouchDatabase`, `GenericResource`, `EndOfLine`, `LeafletContextInterface`, `FirebaseFirestore.DocumentReference`, `Mp4BoxTree`, `RequestParams`, `Preprocessors`, `ViewerOptions`, `TypeSelectionProps`, `NgScrollbarBase`, `CssAstVisitor`, `BluetoothRemoteGATTServer`, `OrgType`, `IDynamicStyleProperty`, `HdTronPaymentsConfig`, `CmsEntryPermission`, `HsLayerFlatNode`, `ItemsList`, `IntNumber`, 
`InterpolationType`, `ParsedConfig`, `ItemUUID`, `EasyPzCallbackData`, `LogicalCpuController`, `VertexAttribute`, `WorkspaceState`, `JsonlDB`, `OutputsType`, `EnumValues`, `TileInputs`, `MediaStreamAudioDestinationNode`, `GraphWidget`, `FontWeight`, `TAuthor`, `ListOperations`, `BackblazeB2File`, `WidgetManager`, `UserSubscriptionsInfo`, `ElementOptions`, `ConsoleMessageLocation`, `CompilerSystemWriteFileResults`, `IImposer`, `PyteaOptions`, `DejaTilesComponent`, `DraymanComponent`, `DatabaseItem`, `LayerStyle`, `CallSignatureDeclaration`, `SerializableState`, `ListModelConfig`, `unchanged.Unchangeable`, `IReversibleJsonPatch`, `PathHash`, `Route`, `RegistryVarsEntry`, `SfdxFalconResultRenderOptions`, `NodeCG`, `requests.ListDbHomesRequest`, `TsGenerator.Factory.Type`, `HTMLProps`, `KernelProfile`, `DrawConfig`, `MockedFunctionDeep`, `PlacementProps`, `SymbolAccessibilityResult`, `d.Logger`, `MethodResponse`, `ExceptionHandler`, `ColumnAnimationMap`, `MessageMatcher`, `ExecInspectInfo`, `ThyTreeService`, `EventTypeMap`, `IAsyncParallel`, `ConnectableObservable`, `ProgressTracker`, `SortPayload`, `RTCIceParameters`, `Announcement`, `MiddlewareOverload`, `FixtureSetupDeps`, `GlyphElement`, `UserDetails`, `DynamicGrammarBuilder`, `EventMetadata`, `DepthModes`, `Living`, `WatchOfConfigFile`, `GfxDebugGroup`, `FreezerInstance`, `ParsingMetadata`, `QueueService`, `DocumentType`, `ElementRenderer`, `PinOverrideMode`, `MapObjActorInitInfo`, `DirectiveType`, `SavedObjectsFindResult`, `PropertyKey`, `IRules`, `ComponentMetadata`, `ParameterizedContext`, `RootConnection`, `VisitorContext`, `MuteRoomTrackRequest`, `ItemDataType`, `ModelIndexImpl`, `ChannelResult`, `AsyncTask`, `CookieStorage`, `IUserSession`, `serviceRequests.GetJobRequest`, `ObservableDbRef`, `RtkQueryMonitorState`, `XPathData`, `DateBatch`, `PutSessionCommandInput`, `ControllerAction`, `DateProfileGenerator`, `ActionFunction1`, `requests.ListContainerDatabasePatchesRequest`, `DeleteExperimentCommandInput`, `GroupIdentifier`, `DAGDegrees`, `DryRunPackagePolicy`, `d.JsonDocsComponent`, `PropertyToValues`, `DropDownElement`, `OutStream`, `DragRefInternal`, `DealRecordsConfig`, `ResultTreeNode`, `ReakitTabInitialState`, `MaxHeap`, `SQLQuery`, `AccountGameCenter_VarsEntry`, `SendPayload`, `UpdateIntegrationCommandInput`, `ViewCompiler`, `RelocateNodeData`, `DecorationRenderOptions`, `GetCertificateResponse`, `SfdxTask`, `NormalizedEntrypointItem`, `RecurringBillId`, `FileMap`, `GeocoderQueryType`, `SerializationContext`, `IViewModel`, `CountryCode`, `Mongoose`, `NavNodeInfoResource`, `BaseShape`, `Cocoen`, `ElevationRangeSource`, `IPortfolio`, `StringTypeMapping`, `ConditionGroup`, `CustomSprite`, `PxtNode`, `StageCrewMember`, `ListExportsRequest`, `Permission`, `NoteName`, `AlertClusterStatsNode`, `XMLHttpRequest`, `TrackEventType`, `ProblemEntity`, `RestManagerRequestData`, `BackwardIterator`, `AddSourceIdentifierToSubscriptionCommandInput`, `ScrollToOptions`, `AsyncStorageHandler`, `EmitOptions`, `ControlFlowGraph`, `RectL`, `DocFn`, `Float32Array`, `WarpPod`, `FileReader`, `PluginManifest`, `HasLocation`, `SceneExport`, `CrossTypeHost`, `BlockbookBitcoin`, `BlockClass`, `ClassificationResult`, `IndexNode`, `Atom.Range`, `IndexPatternPrivateState`, `INamedVector`, `DetailListService`, `BasePathCoverage`, `InstallationsFile`, `MailSettings`, `IAuthUserWithPermissions`, `ISearchState`, `RenderNode`, `Iso`, `FiltersCreationContext`, `UpdateDomainCommandInput`, `AttributeToken`, `AuthStatus`, `MenuBuilder`, `StepperState`, `NetNode`, 
`Sky`, `IFilterInfo`, `Modal`, `SelectContainerProps`, `HsvaColor`, `PluginStreamActionPayload`, `ParsedInterface`, `NineZoneStagePanelsManager`, `AESKey`, `RegistrationType`, `IPrimitiveExpression`, `GPUTextureView`, `MemoizedSelectorWithProps`, `TreeNodeWithOverlappingSubTreeRoots`, `TStoreName`, `DeleteApplicationCommandOutput`, `CharUnion`, `ToastActions`, `TEffects`, `NavigationEnd`, `FragmentableArray`, `RRule`, `FileTransfer`, `XmlRecording`, `LicenseType`, `messages.Source`, `GameObjectGroup`, `DaffCategory`, `UInt32Value`, `WebviewEvent`, `WidgetOpenerOptions`, `ReceiverEstimatedMaxBitrate`, `DataPin`, `React.KeyboardEventHandler`, `WorkerPool`, `ApolloCache`, `MockProviders`, `JsonSchemaRegisterContext`, `GroupData`, `GraphExecutor`, `StateType`, `android.view.LayoutInflater`, `BoxPlotPoint`, `UIPageViewControllerImpl`, `BitcoinSignedTransaction`, `SetOptions`, `EmitOutput`, `DefinedSmartContract`, `UrlObject`, `UploadState`, `KayentaCredential`, `JobMetadata`, `Fig.Generator`, `DatepickerDialog`, `IStandardEvent`, `Turmoil`, `IStatisticSum`, `DukBreakPoint`, `DotenvLoadEnvResult`, `SessionResponse`, `IStructuredSearch`, `IOrchestrationFunctionContext`, `ParserAstContext`, `ProposalMessage`, `Upgrades`, `SignupRequest`, `ScanMetadata`, `NotificationList`, `STPAPIClient`, `vscode.SymbolKind`, `El`, `PlayerList`, `RegistryPackage`, `BlockchainContext`, `BotFilterFunction`, `PresentationPreviewAttribute`, `Separated`, `LoanMasterNodeRegTestContainer`, `CurrentUserType`, `SystemVerilogContainerInfo`, `IImageFile`, `FluentDOM`, `eventHandler`, `FoundationElementRegistry`, `ObjectAssertion`, `SceneObjectBehavior`, `EvaluatedStyle`, `RateType`, `IAudio`, `RBTree`, `ts.PropertyAssignment`, `HoverInput`, `PointString3d`, `InvalidatorSubscription`, `CurveLocationDetailArrayPair`, `PDFContext`, `MDCChipSetAdapter`, `RectangleConstruction`, `ICSVInConfig`, `IEmployeeUpdateInput`, `NotificationTemplateRepository`, `QueryCacheKey`, `PaymentRequest`, `Conic`, `HttpServiceBuilderWithMetas`, `KeyFunc`, `FluentRuleCustomizer`, `Channels`, `ButtonToggleComponent`, `KeyframeAnimationInfo`, `messages.PickleTable`, `IGetActivitiesInput`, `SQLiteTableDefinition`, `IModelIdArg`, `PathProxy`, `LayerDescriptor`, `StatementListNode`, `SpecQuery`, `AccountRegistry`, `PropertyDescriptorMap`, `RouteRecordNormalized`, `SubscriptListResult`, `DndService`, `ClubEvent`, `MutationListener`, `PathInfo`, `Json.StringValue`, `NameInfoType`, `I18nEntries`, `WalletSigner`, `BigNumber.Value`, `CircularDependency`, `ConnectionCredentials`, `RouteParams`, `ServerCertificateRequest`, `RequireContext`, `StringTableEntry`, `Distributes11`, `NativeContractStorageContext`, `RailsDefinitionInformation`, `GeneratorCore`, `ILinkedListItem`, `ReflectionKind`, `RolePermission`, `RedBlackTreeStructure`, `UILabel`, `PropertyEditorProps`, `VpnServerConfiguration`, `Pools`, `RnM2Node`, `Parse`, `HTMLScWebglBaseChartElement`, `YColumnsMeta`, `DeclarationFlags`, `TabLayoutNode`, `OpenCVConfig`, `AbsoluteSizeSchema`, `BotTimer`, `Now`, `DateInputFormat`, `ExternalServiceIncidentResponse`, `E1`, `Limits`, `TSDocConfiguration`, `ConstructSignatureDeclaration`, `TypeFormatFlags`, `CacheProvider`, `ScrollData`, `GridsterItem`, `UserCourseModel`, `PointList`, `Conv3DInfo`, `ElementAccessExpression`, `MeshColliderShape`, `OrderDTO`, `ClockMock`, `DaffRouteWithDataPath`, `IBookmark`, `LoadedEnv`, `KnotVector`, `ParaType`, `AnnotationChart`, `DebugLogger`, `LabeledScales`, `DecompileResult`, `PostfixUnaryExpression`, `SavedQueryMeta`, 
`UnauthorizedErrorInfo`, `HdTronPayments`, `ODataConfiguration`, `Snippet`, `IContextProvider`, `TimeInstant`, `NestedPayloadType`, `RTCRtpTransceiver`, `PipelineProject`, `requests.ListManagementAgentInstallKeysRequest`, `V1RoleBinding`, `TargetResourceType`, `GetSemesterTimetable`, `IComparer`, `RollupResult`, `Positions`, `SagaMiddleware`, `Hookable`, `AttributeService`, `SetOptional`, `Id64Array`, `ProfileServiceAPI`, `SetStateCommitmentJSON`, `InviteMemberCommand`, `Common`, `LocalFluidDataStoreContext`, `WalletContext`, `WriteResult`, `Lineup`, `KeyringPair`, `ValidationConfig`, `TaskName`, `TextWriter`, `StepResultAfterExpectedKey`, `CdkDialogContainer`, `ImportSteamFriendsRequest`, `AliasedFeature`, `AutorestArgs`, `IAppVersion`, `ReactiveController`, `TNSImageAssetSaveFormat`, `GXRenderHelperGfx`, `SearchMode`, `OutputAsset`, `Apply2`, `d.ComponentCompilerMeta`, `GroupDocument`, `WorkingDirectoryFileChange`, `CommandClasses.Basic`, `StoreProvider`, `DeploymentCenterData`, `BitcoinNetworkConfig`, `Peer.DataConnection`, `AccidentalMark`, `ProviderResource`, `GetLoggingConfigurationCommandInput`, `AuthResourceUrlBuilder`, `DeltaChangeContext`, `LinkRenderContext`, `FunctionRunner`, `MapEventsManagerService`, `RequestTypes`, `Archives`, `IStepDefinition`, `RangeSelector`, `WorkspaceResourceName`, `SettingsPriority`, `EntityWithEquipment`, `ArrayLiteralExpression`, `AstMetadataApiWithTargetsResolver`, `GetTraceSummariesCommandInput`, `ComplexBinaryKernelImpl`, `IReferenceLayer`, `CorrelationsParams`, `SponsoredAuthorization`, `IndicatorNode`, `IINode`, `ApiDefForm`, `EntityQuery`, `IKeyState`, `OperatorLogPoint`, `GfxrRenderTargetDescription`, `ICreateChildImplContext`, `RequestQueryParamsType`, `Appender`, `ParsedElement`, `TVSeason`, `ValidatePurchaseGoogleRequest`, `ISeinNodeExtension`, `LogDescription`, `Types.ObjectId`, `MsgCreateBid`, `WebRequestMethod`, `LoadableComponent`, `Toc`, `AuthenticateOptions`, `AttachVolumeCommandInput`, `MockDocumentFragment`, `RoxieService`, `EntityCollection`, `Evidence`, `double`, `ProposalTransactionJSON`, `Fontkit`, `CacheRecord`, `LensMultiTable`, `StandardFontEmbedder`, `VisibilityFilters`, `SuiSelectOption`, `ExiftoolProcess`, `IEntity`, `NestView`, `SourceDescriptionItem`, `SendResponseParams`, `CONFIG`, `IGroupDataArray`, `Quad`, `Dependencies`, `SMTIf`, `CombinedVueInstance`, `t.ObjectExpression`, `TSPropertySignature`, `PluginFunctions`, `PrimitiveTypeKind`, `BankAccountService`, `requests.ListVcnsRequest`, `CanvasPinRow`, `StableVer`, `RunService`, `N7`, `RemoteVideoStream`, `ObservableParticle`, `Quantifier`, `React.MouseEvent`, `RequestQueryBuilder`, `RuleViolation`, `ReadonlyESMap`, `AnalysisDataModel`, `CosmosDBManagementClient`, `ChainManifest`, `WatchDog`, `ParticipantsAddedListener`, `StructureLink`, `KeyframesMap`, `SourceTypes`, `TaskFolder`, `BoundElementPropertyAst`, `ContextTransformInfo`, `NpmPublishClient`, `SymbolDataVisibility`, `CurrencyId`, `GitConfig`, `TransactionEventBroadcaster`, `ExtendedChannel`, `FormOptions`, `MutableMatrix44`, `RCloneFile`, `Turn`, `PlatformContext`, `TestContextCustom`, `ITaskWorker`, `MethodNext`, `IParamSignature`, `TSVideoTrack`, `PutRecordCommandInput`, `EnumDescriptorProto_EnumReservedRange`, `Commander`, `BigNumberish`, `ZonedMarker`, `SubnetDescription`, `AccessorCreators`, `VennDiagramProps`, `Travis`, `IRootScopeService`, `WalletName`, `Dialogic.DefaultDialogicOptions`, `TechniqueDescriptor`, `ResourceSettings`, `dGlobals`, `WritableStreamDefaultController`, `ForwardedRef`, `TBase`, 
`ScreenName`, `ApiParameter`, `NotificationMessage`, `IMouseEventTrigger`, `ErrorExpressionCategory`, `QueryParserListener`, `IProvisionContext`, `CustomMaterial`, `ScraperOptions`, `IIndexPattern`, `MovimientoModel`, `GX.TexGenSrc`, `DefinitionNode`, `AnalyzerNodeInfo`, `ThisParameterType`, `DaffCategoryFilterRangeNumeric`, `VRMBlendShapeGroup`, `RNNCell`, `FieldFormatsSetup`, `SocialTokens`, `PatternClassArgumentNode`, `LaunchConfiguration`, `messages.Envelope`, `LinterMessage`, `UserQuery`, `IModelReflectionData`, `MacroAction`, `HappeningsInfo`, `WorkRequestClient`, `vscode.ExtensionContext`, `EntityManager`, `TokenProps`, `WordMap`, `vscode.Terminal`, `ScopedKeybinding`, `INodeUi`, `StoryMenuItemProps`, `MailerService`, `Helper`, `EventSubscriptionCallback`, `OperatorUser`, `BottomNavigationTab`, `SurveyLogicType`, `request.SuperTest`, `RootParser`, `IConfigService`, `FabricEvent`, `ModelConfig`, `Payment`, `StellarRawTransaction`, `OpenYoloCredentialHintOptions`, `Utilities.EventWrapperObject`, `QueryValue`, `StructName`, `SuiteWithMetadata`, `MergeCSSProperties`, `LinkedContracts`, `ListenerHandler`, `ElementArrayFinder`, `NamespaceMember`, `ParsedParameters`, `PrimitivePropertyValueRenderer`, `TelegramBot`, `IMdcCheckboxElement`, `PropertiesMap`, `CreateEmailTemplateCommandInput`, `AnimationRange`, `IRuleApiModel`, `ISegment`, `SpecPage`, `InjectorService`, `Script3D`, `InitializeResult`, `Roots`, `XUploadNode`, `Result`, `ClientKeyExchange`, `DeleteAppInstanceCommandInput`, `VaultActivity`, `PathNodeItem`, `KeplrSignOptions`, `ScaleType`, `Module1`, `NumberWidget`, `DMMF.Field`, `RequestBodyMatcher`, `SOClient`, `keyComb`, `GraphQLOutputType`, `Models.CommandInput`, `ModuleBuilder`, `SlashArgRecord`, `UpdateGroup`, `NavigationStart`, `DatastoreType`, `StripeShippingMethods`, `ESLint`, `HeatmapData`, `CommentEntity`, `SelectMenuProps`, `RTCDataChannelEvent`, `TimestampsToReturn`, `ParsedResponseHeaders`, `EntitySystem`, `RecordingStream`, `IpRangeKey`, `DOMParser`, `LastError`, `DynamicDialogRef`, `CircleDatumAlternative`, `NodeBank`, `NamedCurveAlgorithms`, `GetCustomVerificationEmailTemplateCommandInput`, `XmlElement`, `ImageMetadata`, `NextRouter`, `BScrollConstructor`, `_STColumn`, `MatchmakerMatched_MatchmakerUser`, `IFaction`, `theia.Uri`, `SchemaRootKind`, `DChoice`, `DocumentMapper`, `RangeProps`, `PropertyRecord`, `ReturnStatement`, `CallAst`, `EncodedPart`, `LayerNormalizationLayerArgs`, `DeclarationParams`, `System_Array`, `WatchCompilerHostOfFilesAndCompilerOptions`, `KmsClientSupplier`, `ClaimedMilestone`, `ASTWithSource`, `OutputTargetDocsCustom`, `Scenario`, `SavedObjectsClientContract`, `KuduClient`, `Shape.Base`, `TagMapping`, `displayCtrl.IShowConfig`, `ProductTranslation`, `CaseStatuses`, `ConstraintSet`, `ServiceDecorator`, `UpdateReplicationConfigurationTemplateCommandInput`, `OrganizationProp`, `s.Node`, `SlackHook`, `AttributeModel`, `ForceGraphNode`, `RowItem`, `IFilm`, `Embedding`, `GreenhouseJobBoardJobNode`, `TextLayoutParameters`, `IGlTFExtension`, `ClickParam`, `ExpressionType`, `AddressState`, `Scheduled`, `KEYS`, `requests.ListAvailabilityHistoriesRequest`, `AsyncStorage`, `t.TETemplate`, `Web3Wrapper`, `ActionForRender`, `AdminIdentity`, `AnimeFields`, `TimeoutID`, `ConfigurableProfilePermissions`, `SagaIterator`, `HEvent`, `AbstractType`, `TE`, `UTXO`, `ConfigModel`, `BlobContainer`, `SendOptions`, `TSESLint.RuleContext`, `LighthouseBudget`, `PromiseConstructor`, `Wrapped`, `FuncVersion`, `TransformerProps`, `ServerSocket`, `TimefilterService`, 
`QuestionToken`, `IKactusState`, `StacksMessage`, `DisconnectReason`, `EditorPosition`, `AggregatePriceRepository`, `Literal`, `SFieldDescribe`, `Plane`, `JSONSchema7Definition`, `WritableOptions`, `FormErrors`, `PP`, `IGLTFExporterExtensionV2`, `SpreadStorableMap`, `DatabaseSubType`, `__HttpHandlerOptions`, `MethodDefinition`, `EditablePolygon`, `TriumphNode`, `ethers.BytesLike`, `ISPListItem`, `ControllableLabel`, `GetMeshSourceOptions`, `PoiBuffer`, `DistributeArgs`, `UnknownObject`, `IServerConfigModel`, `SearchResultProps`, `RoleManager`, `NgxTranslations`, `ParameterPath`, `ObjectRenderer`, `EventSubscription`, `UINavigationItem`, `VercelResponse`, `CloneRepositoryTab`, `juggler.DataSource`, `WexBimShapeMultiInstance`, `GradConfig`, `TriggerEngine`, `ModelCallbackMethod`, `VNodeLocation`, `ExpressionFunction`, `SomeCV`, `Heatmap`, `UninterpretedOption`, `UserMusicResult`, `DropdownMenuProps`, `MDCTextFieldLineRippleAdapter`, `SentryUser`, `QueryMode`, `Reminders`, `FieldName`, `StockItem`, `HydratedFlag`, `CallOverrides`, `RenameMap`, `ICustomClassUIMethod`, `Command`, `Path6`, `SessionWorkspace`, `AlphaConfig`, `HistoryLog`, `PlaceTradeDisplayParams`, `DataEventEmitter.EventDetail`, `ConvertedDocumentFilePath`, `GlobalTime`, `InjectionMap`, `CustomFont`, `DkrLevel`, `ListAccountsCommandInput`, `ToggleButton`, `IGroupSharingOptions`, `Vault`, `HelpCenterService`, `To`, `DeletePipelineCommandInput`, `CourseTask`, `Calibration`, `d.CompilerJsDoc`, `ZoneOptions`, `IQueryParameters`, `SetMap`, `IOperand`, `TimerState`, `AgentPubKeyB64`, `DAL.KEYMAP_KEY_DOWN_POS`, `Wall`, `SignalingClientEvent`, `requests.ListWorkRequestsRequest`, `TleParseResult`, `ParseTreePatternMatcher`, `requests.ListNetworkSourcesRequest`, `ClusterResource`, `peerconnection.Data`, `Content`, `messages.PickleTag`, `Emitter`, `ListRoutesCommandInput`, `AccessControl`, `ITransportConstructor`, `FolderRequest`, `Detail`, `d.PackageJsonData`, `FlowAssignment`, `TableItemState`, `ExpressRouteGateway`, `ObjectCriteriaNode`, `RBNode`, `TRK1`, `Nodes.DocumentNode`, `TileTestData`, `d.HydrateFactoryOptions`, `PolyIntEdge`, `SourceMapConsumer`, `EquipmentInfo`, `ComputedEnum`, `ExpNumIndex`, `LockHandle`, `UserStoreProperty`, `AndroidConfig.Resources.ResourceXML`, `Order3Bezier`, `ParamValue`, `ExpressionVariable`, `Funding`, `Bindings`, `HTMLTableDataCellElement`, `YAMLParser`, `DynamicFormControlModel`, `GetProjectResponse`, `ListElement`, `SelectionRange`, `StatusResult`, `IOSNotificationAttachment`, `Chai.ChaiUtils`, `BaseSession`, `DescribeReplicationTasksCommandInput`, `BadgeInfo`, `ProtocolEventMessage`, `RealtimeVolumeIndicator`, `BlockHeaderWithReceivedAt`, `Gem`, `ProgramObjects`, `QueryBucket`, `EntryInfo`, `Gain`, `IComment`, `requests.ListRemotePeeringConnectionsRequest`, `NgModuleProviderDef`, `TextMessage`, `IImageData`, `WebContainer`, `GithubBranch`, `Consumer`, `MyType`, `WordArray`, `ForgotPasswordRepository`, `Timefilter`, `SliderEditorParams`, `StationService`, `Disembargo`, `JsonSchemaOptions`, `MetricsSourceData`, `ServerSession`, `PoliticalAgendasData`, `PathFunction`, `HistoricalDataItem`, `AuxBotVisualizer`, `Authentication`, `ObservableQueryProposal`, `FileDescriptor`, `EntryTypes`, `LogPanelLayout`, `DebugProtocol.InitializeResponse`, `ts.PropertyDeclaration`, `OptionsSync`, `Bullet`, `Cypress.ConfigOptions`, `DeploymentSubmission`, `MutableGridCategory`, `ISnapshotTree`, `RenderValue`, `WitnessScopeModel`, `MutationOptions`, `ViewTemplate`, `CachedMetadata`, `StarterOption`, `MembersState`, 
`ModuleResolutionHost`, `TaskActionsEvaluator`, `ServiceExtension`, `StatisticAverageBlockTime`, `Toaster`, `SeriesItemsIndexesRange`, `ContextValues`, `CreateWebhookCommandInput`, `SegSpan`, `ImportWithGenerics`, `SharedServiceProvider`, `SelectorsMatch`, `BodyContent`, `Nat`, `AccessorFn`, `GfxRenderTargetP_GL`, `RpcMessageBuilder`, `RandGamma`, `UpdateJobDetails`, `LoggerLevelAware`, `XCascadeNode`, `ErrorDetails`, `ApiType`, `Tsoa.Type`, `ApplicationSettingsService`, `TimeInterval`, `SourceNotFoundFault`, `JoinedEntityType`, `EditorSettings`, `DirEntry`, `HomePluginSetupDependencies`, `ResolveStore`, `CombinationConstraint`, `OnClickData`, `OutputEndpointData`, `PluginData`, `IpcMessage`, `TevStage`, `DataField`, `GlobalEvent`, `DBConnectionConfig`, `HTMLSelectElement`, `Web3EventService`, `RelayRequestAny`, `IEnvironment`, `VoiceProfile`, `DogeBalanceMonitorConfig`, `GeneratorNode`, `ReadModelPool`, `NSType`, `PluginClass`, `Reportable`, `GradientPoint`, `IServiceContainer`, `ThrowStatement`, `Hex`, `ActionsType`, `Register16`, `ISearchQuery`, `StateRef`, `MatSortHeaderIntl`, `ReactionHandleOptions`, `ListDatabasesCommandInput`, `BackgroundReplacementOptions`, `TimeoutJobOptions`, `ScopeHook`, `DescribeApplicationsCommandInput`, `ChartPoint`, `theia.Range`, `SymbolInformation`, `WheelmapFeature`, `NativeSystemService`, `DecryptParameters`, `TimeGridViewWrapper`, `MultiSegmentArena`, `DataGridColumn`, `WithdrawByoipCidrCommandInput`, `BlockchainSettings`, `AutoTranslateResult`, `HTMLSpanElement`, `GetCertificateAuthorityCsrCommandInput`, `MockCSSStyleDeclaration`, `CreateDatasetCommand`, `ComposableFunctionArgs`, `UniqueNameResolver`, `CeloTx`, `Deposit`, `ElasticsearchModifiedSource`, `OMapper`, `APIOrder`, `ProfileProvider`, `PublicMilestone`, `Combined`, `ConditionInfo`, `NodeDict`, `ListLoggingConfigurationsCommandInput`, `Upload`, `UserModel`, `FileBoxInterface`, `QueryExpressionContext`, `MessageKeys`, `DocSection`, `AbstractRegion`, `d.StyleDoc`, `SpriteRenderer`, `paneType`, `TemplateToTemplateResult`, `IGenericField`, `AcceptInviteCommand`, `BaseProps`, `AnyModel`, `ShadowRoot`, `TImportError`, `AllocationDoc`, `ShelfFieldDef`, `SecurityGroupRuleLocation`, `SystemUnderTest`, `DiscoverFieldDetailsProps`, `BlockBlobURL`, `IToastAttrs`, `ts.System`, `RepositorySummary`, `LocalDatabase`, `IDocumentService`, `SmsCookie`, `ChannelUser`, `protos.common.SignaturePolicyEnvelope`, `StageData`, `TEX1_Sampler`, `WebDependency`, `CryptoEffectFrom`, `ControlsService`, `RoleService`, `SourceTarget`, `ContentApiService`, `IApi`, `GetMessagesFormatterFn`, `MatchmakerMatched_MatchmakerUser_NumericPropertiesEntry`, `NavigationRoute`, `DoubleMap`, `BeneficiaryUpdateParams`, `IMobileTarget`, `OrigamiControlValueAccessor`, `MeshInstance`, `ZipFileOptions`, `TContainerNode`, `StakingData`, `UserName`, `CalendarHeatmapDataSummary`, `IPerson`, `IFluidHandle`, `WebpackDevServer`, `HttpResponseInternalServerError`, `CodeFile`, `ModMetaData`, `PairsType`, `FeatureSetup`, `LoggerLevel`, `WebBinding`, `ILoggerOptions`, `UISize`, `PointRef`, `PDFOptionList`, `ThumbnailSize`, `ContextParameters`, `IBlockchainQuickPickItem`, `Bangumi`, `BlobItem`, `ServerRequestModel`, `DescribeDatasetRequest`, `IAutocompletionState`, `ClusterSettingsReasonResponse`, `TPT1`, `SerializedTypeNode`, `NamedCurveKeyPair`, `py.ScopeDef`, `ExtendableEvent`, `ApplicationConfig`, `AnalyticUnitId`, `RangeSelector.RangeObject`, `RemoteNode`, `ITests`, `DashboardId`, `MarkdownString`, `ParsedAcceptHeader`, `SettingsProperty`, 
`ResolvedDependencies`, `CallErrorTarget`, `PLSQLSymbolKind`, `DefaultOptions`, `Shader_t`, `Events.preframe`, `Sequelize`, `Scene`, `ResolvedRoute`, `BeanObserver`, `BackwardRef`, `FilamentSpool`, `IMatrix33`, `WriteableStream`, `EntityRemote`, `ComponentSingleStyleConfig`, `UsersServiceTest`, `OpenOrCloseListener`, `UpdateModelDetails`, `EventPluginContext`, `DataStream`, `INumberColumn`, `EntryModule`, `PrereleaseToken`, `AdaptFuncT`, `FreeStyle`, `TmpfileOptions`, `TestDtoFilter`, `CipherWithIds`, `ControllerEvent`, `SuiModal`, `KVStore`, `AttachmentRequest`, `IGetTimeLogReportInput`, `EventAggregatorService`, `types.UMLClassMember`, `MapMesh`, `Cookie`, `ITree`, `CollisionParts`, `GroupLocalStorage`, `ApplicationCommandData`, `ConsoleService`, `GoogleTagManagerService`, `EntityComponent`, `DIALOG`, `DropdownState`, `SNSNoAuthorizationFault`, `ContactLightweight`, `TodoStore`, `PopoverInitialState`, `PouchdbDocument`, `UniqueSection`, `d.OptimizeJsResult`, `CompletionEntryData`, `ValidationComposite`, `IFindQuery`, `PaginationComponentOptions`, `WalletInfo`, `FileEditActions`, `ScopedDeployment`, `AccessRule`, `PortalManager`, `ListRoomsRequest`, `MessageRecord`, `TransportStream`, `BackgroundColor`, `EthereumSignatory`, `IAgreementConnector`, `ResTable`, `FortaConfig`, `MockWindow`, `OperationGroup`, `NormalItalic`, `AddressService`, `ListDatasetsRequest`, `TilemapData`, `TransactionService`, `OrganizationEmploymentType`, `Getters`, `MalSeq`, `TFile`, `StreamerConfig`, `BattleModel`, `Endianness`, `Armature`, `UpdateArgs`, `RationalArg`, `CharacteristicSetCallback`, `UserProfileFactory`, `OptionInfo`, `FormData`, `Animatable`, `UIColor`, `AnimatorSet`, `StatFilter`, `Codec`, `UpdateResponderRecipeResponderRule`, `JavaScriptRenderer`, `ManagementAgentPluginAggregation`, `GetFreeBalanceStateResult`, `BuildVideosListQueryOptions`, `BindingElement`, `RegionInfoProviderOptions`, `EditableTextBase`, `PageInfoListItem`, `AnnotationsOptions`, `DeviceConfigIndex`, `CategoryService`, `RO`, `DailyApiResponse`, `NodeJS.EventEmitter`, `CompletionStatus`, `Stretch`, `OverpassElement`, `Association`, `SimpleChartDataType`, `DaffCategoryFilterEqualOptionFactory`, `JSON`, `RouteHandlerMethod`, `ListDeviceEventsCommandInput`, `PageLayout`, `FunctionLikeDeclaration`, `CommandPath`, `SQS.Message`, `React.ReactInstance`, `Constructable`, `WifiConfigureRequest`, `ChannelProperties`, `SvelteDocument`, `SecurityHub`, `PredicateWithIndex`, `JsonComposite`, `ListDomainsForPackageCommandInput`, `types.IDynamicOptions`, `BlockCompiler`, `ArgError`, `STPPaymentMethod`, `GoGovReduxState`, `MetricTypeValues`, `ButtonInteraction`, `IPartialLocaleValues`, `BreadcrumbsOptions`, `TabLocation`, `SurveyConfig`, `RequestMethods`, `EngineArgs.DiagnoseMigrationHistoryInput`, `MSITokenResponse`, `ShaderOptions`, `LanguageServiceContainer`, `GetDeploymentResponse`, `Criterion`, `LoggerWithErrors`, `StringArray`, `CreateSchemaCustomizationArgs`, `IPlaylist`, `ParserException`, `DataRequestDescriptor`, `BreadcrumbsNavProps`, `PuppetASTContainerContext`, `MapFunc`, `DefaultVideoStreamIdSet`, `SbbNotificationToastRef`, `ApiMetadata`, `requests.ListIamWorkRequestErrorsRequest`, `HeatmapTable`, `InputData`, `TargetResponderRecipeResponderRule`, `B.JsNode`, `DateRangeKey`, `GeolocationPositionError`, `TaskList`, `EventResponse`, `ARDimensions2D`, `HeaderType`, `ListPageSettings`, `ExpNumNumel`, `IResourceExpression`, `UserInfoInterface`, `IActionsProps`, `WebSocketService`, `MachineConfig`, `RandomNumberGenerator`, `Loadbalancer`, 
`DesignerLibrary`, `FetchFinishedAction`, `FormatDefinition`, `PluginsAtomType`, `PiPostAction`, `NgModuleDef`, `ConfigContent`, `TemporaryStorage`, `Developer`, `ABI`, `IconifyJSON`, `EventConfig`, `Agent`, `DescribeProjectCommandInput`, `undefined`, `CodeSource`, `RouteChain`, `CreateProcedureOptions`, `UniOption`, `SP`, `BrowserHeaders`, `StagePanelsManagerProps`, `TrackCallInfo`, `ScopeTreeRow`, `AnyEventObject`, `RestService`, `ContextLogger`, `Web3ClientInterface`, `ActionCreatorWithPreparedPayload`, `CompiledSchemasArray`, `JsonDecoratorOptions`, `RRES`, `ts.InterfaceDeclaration`, `RxTerms`, `DescribeJobRunCommandInput`, `SessionIdentifier`, `IGetDeviceResult`, `TargetDefinition`, `LineData`, `OperatorSummary`, `DomController`, `UserCredentialsRepository`, `FaunaString`, `ChainEventSubscriberInterface`, `LegacyOperation`, `FormulaBuilder`, `CombinedItemPriceInfo`, `vscode.StatusBarItem`, `StatedBeanMetaStorage`, `EmojiCategory`, `XYZValuesObject`, `ArrayShape`, `URIComponents`, `VirtualNetworkGatewayConnection`, `IMarkmapFlexTreeItem`, `CodeFixContextBase`, `IYamlItem`, `SMTType`, `IApiCallback`, `Blockly.Block`, `CallbackStore`, `TestErc20`, `AppConfig`, `MinimalFS`, `vscode.Position`, `TestSuiteResult`, `HKTFrom`, `ScrollService`, `ScaleConfig`, `AssetParts`, `IPoContent`, `TooltipController`, `AbstractRule`, `RequestFunction`, `OAuthUser`, `ExtendedIColony`, `IDBRequest`, `OnPreResponseResult`, `Task`, `ts.Declaration`, `InternalStacksContext`, `MaxAnalysisTime`, `esbuild.OnLoadResult`, `AutoAcceptCredential`, `Schematic`, `RequestHandlerContextProvider`, `BalmScripts`, `MessageReader`, `viewEngine_NgModuleRef`, `MatchmakerAdd_StringPropertiesEntry`, `OpDef`, `BindingSetting`, `Hobby`, `SRT0`, `LineShape`, `HTMLTableSectionElement`, `RSASigningParams`, `TypedAxiosRequestConfig`, `MagickSettings`, `t_a25a17de`, `RpcKernel`, `MxObject`, `KeybindingItem`, `DatasourceStates`, `AWS.S3`, `MetadataStorage`, `Chars`, `AllowsNull`, `XMLCharState`, `ColorMode`, `DaffCategoryPageMetadataFactory`, `SwaggerOperation`, `ListObjectsV2Output`, `DefaultDeSerializers`, `CombinedScanResults`, `NameObjRequestArchivesFunc`, `NumberBase`, `IFilterValue`, `IStoreOffer`, `ProviderCallback`, `httpm.HttpClientResponse`, `UseRequestConfig`, `Film`, `HeaderItemProps`, `SkillMap`, `MetadataValue`, `UiGridColumnDirective`, `ThisType`, `ComponentSlotStylesPrepared`, `Express.Request`, `LocalReference`, `RushConfigurationProject`, `ListModelsResponse`, `ShareTransferStorePointer`, `MotorcycleDomSource`, `AngularFireStorageReference`, `Piscina`, `MemoryFileSystem`, `SourceRange`, `SwapTransition`, `ISuperdesk`, `Island`, `SetupCommitment`, `MoneyBase`, `ImportNode`, `AudioFormat`, `PollingPerformanceObserverTaskQueue`, `ISearchLocation`, `QMParam`, `IAnyModelType`, `RuntimeService`, `MetaDataRequest`, `enet.INetEventHandler`, `PointCandidate`, `AudioClip`, `IGlobOptions`, `C_Point`, `types.ScrollData`, `ofAp`, `WizardContext`, `PreviewComponentProps`, `AaiOperationTraitDefinition`, `CompilerSystemRemoveDirectoryResults`, `FilterQueryBuilder`, `TimeSpec`, `MatDateFormats`, `ProcessConfigurationOptions`, `IntBuffer`, `EventToken`, `IHttpConfig`, `DebugProtocol.EvaluateResponse`, `AbstractKeymapData`, `SyncedDataObject`, `CATransform3D`, `TPlayItem`, `ProtoCtx`, `RankingEntry`, `BoxSide`, `AdapterConfig`, `UtilsService`, `DiagnosticSeveritySetting`, `Transform`, `JSDocTag`, `NzResizeObserver`, `DbStxEvent`, `InputMethod`, `SeedReference`, `ILoadBalancer`, `CellArgs`, `RectScrollFixed`, `IDeviceInformation`, 
`MonacoFile`, `ResolvedReflectiveProvider`, `BufferSize`, `KCDHandle`, `MasterDataObject`, `ResourceSummary`, `IORouter`, `RawTestResult`, `EThree`, `ParseOptions`, `JSCodeshift`, `SyncDoc`, `BlockComponent`, `UnpackNestedValue`, `React.SFC`, `Authorization`, `InputDefinitionBlock`, `SDKConfiguration`, `DeployedReplica`, `PatternPreset`, `FibaroVenetianBlindCCReport`, `GRUCellLayerArgs`, `Grammar`, `ForcedRetryErrorInfo`, `Konva.Layer`, `NodeWithChildren`, `PreProcessedFileInfo`, `AnimationReferenceMetadata`, `MessageFormat`, `RustLog`, `AnnotationsProvider`, `MediaMarshaller`, `CardTypes`, `MemoryWriteStream`, `ResponseTiming`, `ESTree.AssignmentExpression`, `IBpmnModeler`, `Atoms`, `DeployStackResult`, `ts.ParenthesizedExpression`, `PackageDetail`, `ConcreteClass`, `LongestNudgedSegment`, `IMechanicsQuery`, `DiscordEmbed`, `ILabelConfig`, `CloudFunction`, `IRequestApprovalFindInput`, `HoverParams`, `ResultFilter`, `_TypedArrayConstructor`, `FfprobeData`, `IQueryListRendererProps`, `AccuracyEnum`, `ResolvedModule`, `Shake`, `FieldExpr`, `_https.RequestOptions`, `RedisInterface`, `PbEditorElementPluginArgs`, `requests.ListIdentityProviderGroupsRequest`, `PropertySet`, `CreatePhotosDto`, `TNoteData`, `GetAppCommandInput`, `IOperationType`, `requests.ListConsoleHistoriesRequest`, `AssignStatementContext`, `Quat`, `React.Key`, `StarknetWindowObject`, `PrecalculatedBot`, `ResourceGroupXML`, `ViewerContext`, `TOCMenuEntry`, `IStateDB`, `Interpolator`, `ExperimentDocument`, `BlobURL`, `NavResponse`, `CivilTCR`, `ISWATracker`, `SxParserConfig`, `JsState`, `ReadonlyColor`, `TypeGraph`, `HookData`, `LevelLogger`, `HighlightResult`, `ResponsePath`, `ListWorkRequestLogsResponse`, `FrontendApplication`, `Conflicts`, `PayloadActionCreator`, `ReplicationState`, `UIContextProps`, `BotCalculationContext`, `DisplayAccessKey`, `StackPanel`, `ParseError`, `MockERC20`, `CtxAndConfigRepository`, `TokenStream`, `SequentialTaskQueue`, `CardPile`, `ITimer`, `ArrOf`, `shareComicFace`, `ProvidersInfoService`, `requests.GetVolumeBackupPolicyAssetAssignmentRequest`, `dataType`, `DirectiveResult`, `IMidwayFramework`, `thrift.IThriftField`, `FuseNavigationService`, `DocumentId`, `ParentBid`, `PuppetASTObject`, `AnnotationDomainType`, `SelectStep`, `IG6GraphEvent`, `PrimitiveTarget`, `CertificateFilter`, `ApplyPendingMaintenanceActionMessage`, `ToastPackage`, `SecurityScheme`, `AugmentedDiagnostic`, `ProtocolParameters`, `MsgCreateDeployment`, `RpcPeer`, `DaffCategoryFilterRequestEqualFactory`, `AssociationType`, `Diagnostics`, `BaseWatch`, `lsp.Hover`, `NLClassifierOptions`, `FindRelationOptions`, `IHeftJestDataFileJson`, `PublicKey`, `IAdminUser`, `Deploy`, `FontType`, `BinarySet`, `FeaturedSessionsState`, `Bitrise`, `BedrockServiceConfig`, `RestObject`, `DateSegments`, `AAA`, `IconifyIcon`, `mongoose.Error`, `WritableAtom`, `ConfigParams`, `MessageBus`, `EventTracker`, `SearchResultItem`, `ColumnInstance`, `UpdateEndpointCommandInput`, `Reporter`, `DejaPopupAction`, `VisualizationToolbarProps`, `ClientRequest`, `IPredictableSupportCode`, `IntelliJ`, `momentNs.Moment`, `RootThunkResult`, `Finished`, `MediaObject`, `PiUnitDescription`, `Ignore`, `TaskRunner`, `TaskExplorerDefinition`, `IHomeViewState`, `NzMentionComponent`, `bank`, `Dataset`, `MenuIDs`, `android.content.res.Resources`, `SetSettingEntry`, `Polymorphic`, `ODataPagedResult`, `DOMUtils`, `TreeServiceTest`, `CssImportData`, `FIRQuery`, `IStackFrame`, `StandardSkillBuilder`, `ConfigStruct`, `TNode`, `RedirectRequest`, `OPCUAClientOptions`, `TProto`, 
`GasPriceOracle`, `Boundary`, `IDataContext`, `BaseRequestOptions`, `GetByKeyRequestBuilder`, `CustomInputArgs`, `DraggableStateSnapshot`, `DescribeEventCategoriesMessage`, `DependencyGraph`, `SocketState`, `ChangePasswordInput`, `MXAnimatedIconData`, `DynamoDB`, `Lifecycle`, `HsCoreService`, `MessageOptions`, `TypedAction`, `FormTypes`, `ChangeOptions`, `CombatantTypes`, `PartialAsyncObserver`, `STPPaymentContext`, `SlaveTimelineState`, `iAst`, `MalMap`, `HsLayoutService`, `PriceSpecGroup`, `V1Servicemonitor`, `MockMAL`, `PluginConfigItem`, `SyncType`, `ControllerUIProp`, `CallableFunction`, `TreeChild`, `OctokitResponse`, `FabricNode`, `CircleCollider`, `AxisPositions`, `SettingsSpec`, `ReadonlyAtom`, `DescribeTableCommandInput`, `MimeContent`, `TemplateRoot`, `MonoTypeOperatorFunction`, `Wallet`, `Cast`, `CryptoEffects`, `RequestExt`, `OnBoardConfig`, `ACP.SuggestionsRequestedEvent`, `RegistryItem`, `ReactiveDBException`, `ServiceLocator`, `Extractor`, `IXLSXExtractOptions`, `TeardownLogic`, `MsgHandler`, `IHistoryRecord`, `RuleConfig`, `NodeService`, `PlanValidationOutcome`, `HTMLInputOptions`, `ApiTypes.Feed.Hide`, `SectionMarker`, `ContactDto`, `GeometryValue`, `ReflectionCapabilities`, `ImageScanner`, `MdcTab`, `LinkedAttachment`, `WatchCompilerHostOfConfigFile`, `FM.DllFuncs`, `BaseMsg`, `Disembargo_Context`, `IPropertyComponentProps`, `Keyed`, `IListRecipient`, `WorkspaceSeed`, `IColours`, `BatchDeleteImageCommandInput`, `painless_parserListener`, `BackgroundFilterVideoFrameProcessorObserver`, `GroupLevel`, `ListJobTemplatesCommandInput`, `NimAppState`, `OutliningSpan`, `ActionStepType`, `UiSettingsCommon`, `HTTP`, `IProxyContext`, `SiteVariablesPrepared`, `KafkaSettings`, `AbstractSqlPlatform`, `React.StatelessComponent`, `MessengerClient`, `ISharedContent`, `SpaceBonus.PLANT`, `HandlerStep`, `BiquadFilterNode`, `SavedComments`, `FacsimileStorage`, `Artefact`, `PlaceholderEmbeddableFactory`, `SourceFileStructure`, `NewE2EPageOptions`, `CreatureType`, `SeparableConvLayerArgs`, `AssetTotal`, `TaskFunctionCallback`, `DescribeRecipeCommandInput`, `UnstakeValidatorV1`, `ColliderComponent`, `Highcharts.VMLRenderer`, `UIEvents`, `BaseTable`, `RenderedChunk`, `Dic`, `CanActivate`, `ModuleID`, `XYCoord`, `Constraint2DSW`, `ObjectAny`, `IBounds`, `WaitForOptions`, `Designer`, `ScrollDirection`, `TBigNumber`, `requests.ListTagDefaultsRequest`, `MssPackage`, `AssignmentExpression`, `CompactdState`, `CallState`, `IActionSet`, `BigNumberValue`, `NodeScene`, `UiActionsStart`, `HttpBatchLinkHandler`, `IChildNode`, `unchanged.WithHandler`, `PointOptions`, `ExternalModuleReference`, `ApmConfiguration`, `ValueGetterFunction`, `FormatDiagnosticsHost`, `GpuState`, `SharedMap`, `Iteratee`, `ICustomizations`, `VideoComponent`, `fabric.Image`, `StyleFunctionProps`, `DidKey`, `StripePaymentIntent`, `GfxRenderPipelineP_WebGPU`, `LabExecutionService`, `ClientService`, `ContractJSON`, `RequestSuccessCallbackResult`, `AcceptInvitationCommandInput`, `CISource`, `requests.ListWafRequestsRequest`, `ConfirmHandlerCallback`, `ImagePickerResult`, `CustomOptions`, `MatTooltipDefaultOptions`, `SPHttpClient`, `CollectionDefinition`, `NamedNode`, `MutationInput`, `Params`, `AppointmentUnwrappedGroup`, `UninstallMessage`, `TemplateItem`, `GameSagaContextPlayers`, `LanguagePackage`, `NovaResources`, `FirstColumnPadCalculator`, `HTTPHeaders`, `MappingBuilder`, `CircuitDesigner`, `GuildBasedChannel`, `RectGrid`, `InstancedBufferAttribute`, `BasicTypeDefinition`, `DSColumnType`, `SpectatorDirective`, `MIRConceptType`, 
`MovimientosService`, `RollupSourceMap`, `IPod`, `ParsedRequestParams`, `AssemblyBlockContext`, `ListDevicesCommandInput`, `SpendingCondition`, `IDBObjectStore`, `d.RollupAssetResult`, `NodeIndex`, `RegExpOne`, `ScryptedInterfaceProperty`, `PendingTestFunction`, `NodeFilter`, `AppVersion`, `IFramework`, `IBApi`, `ODataVersion`, `TerminalVersion`, `BaseEntity`, `ParseFn`, `ApiSection`, `IPage`, `IPolygonGeometry`, `IKeyResultUpdate`, `SubState`, `WriteLeaderboardRecordRequest_LeaderboardRecordWrite`, `TelemetryNotificationsConstructor`, `UniqueSelectionDispatcher`, `SweepContour`, `IExtractedCode`, `ResponderExecutionModes`, `TupleAssignmentNode`, `EffectHandler`, `ir.Expr`, `MDCMenuSurfaceFoundation`, `IOObject`, `PubSub`, `AbortIncompleteMultipartUpload`, `HdStellarPaymentsConfig`, `ManyToManyOptions`, `RequestSchema`, `OncoprintWebGLCellView`, `request.Options`, `ConditionType`, `FunctionPlotDatum`, `BatchItem`, `ServicePropertiesModel`, `Amount`, `NextApiRequest`, `SerializeCxt`, `EdmxActionImport`, `DefaultClient`, `esbuild.Metafile`, `IFlowItem`, `React.WheelEvent`, `AnyColumn`, `HopeElement`, `SitePropsIndex`, `IWorkerMessage`, `VElement`, `ManyToOneOptions`, `ApplicationCollection`, `StylingFunction`, `SecurityIdentity`, `StringSymbolWriter`, `core.PathDescription`, `NLUOptions`, `RowViewModelFactory`, `AutoTranslateGoogleService`, `DateEntry`, `IPage.IRequest`, `LF`, `TriggerInternal`, `LiveActor`, `DeploymentTargetsOperationIO`, `FlagFixPoints`, `NotesRange`, `MdcElementPropertyObserver`, `ISensorProps`, `IPluginModule`, `Animal`, `NumberSet`, `CratePackage`, `InitialOptionsTsJest`, `FindResult`, `TagValueParser`, `ProcessedFile`, `TransformHelper`, `Proposal`, `StopChannelCommandInput`, `ColorChannelControl`, `ApiItemReference`, `CreateJobCommandOutput`, `NoInputAndNoOutputCommandInput`, `SweepEvent`, `ChildComponent`, `ModuleScope`, `LogCorrelationContext`, `BaseData`, `IListQueryInput`, `FlowNodeTypeResult`, `INodeCredentialsDetails`, `LibraryComponentImpl`, `NgGridRawPosition`, `CharacteristicType`, `messages.PickleStep`, `PatternParams`, `MbLayer`, `DialogButton`, `FetchCache`, `ISimpleAction`, `GetModelsCommandInput`, `FakeHTMLAnchorElement`, `MockPeerCertificate`, `IntSet`, `ToRefs`, `UIImageRenderingMode`, `TextDocumentWillSaveEvent`, `GetMetricDataCommandInput`, `AsyncIterableObservable`, `OAuthProvider`, `FRAME_SVG_POLYLINE`, `PaginatedQueryFetchPolicy`, `ReXer`, `Binding`, `SubscribeParams`, `BehaviorTreeNodeInterface`, `Room`, `MetadataAccessor`, `MapContext`, `AdjacentZones`, `Get`, `ts.NewExpression`, `i18n.IcuPlaceholder`, `ICommandDefinition`, `IKeyboard`, `Checkpoints`, `VerifiedHierarchy`, `EllipseEditOptions`, `tl.VariableInfo`, `AccountInfo`, `PluginResourceSettings`, `GesturesObserver`, `ethers.providers.FallbackProvider`, `ICandidateUpdateInput`, `ApplyAssetContext`, `UiButton`, `AnnotationState`, `NineZoneStagePanelPaneManager`, `CollisionScaleType`, `Graph2`, `SpatialCache`, `ThyDragHandleDirective`, `DataTableEntry`, `Finding`, `JavaDownloadRelease`, `PluginDescriptor`, `CreateResolversArgs`, `Timestamped`, `interfaces.Factory`, `MessagePort`, `MintGenerativeData`, `HydrateCacheOptions`, `PadCalculator`, `ElementType`, `teacher`, `TernarySearchTreeNode`, `Setter`, `OutgoingResponse`, `RunnerOption`, `ChatClientState`, `HTMLUListElement`, `PutRequest`, `WorkerManager`, `IRandom`, `ReadFileFailCallbackResult`, `PrimaryExpression`, `ShareTransferStore`, `BufferComposer`, `RenegotiationIndication`, `WebSiteManagementClient`, `SMTEntityDecl`, `DatatableColumn`, 
`PassThrough`, `Algebra.RootNode`, `ShipPlugin`, `EventService`, `styleProps`, `ExpressionLoader`, `FrameworkConfiguration`, `DeleteParams`, `FormatTypeFlags`, `TypesModule`, `ICurrency`, `Right`, `SimpleRecordInput`, `IHealthCheckResult`, `ParsedItem`, `ECInstancesNodeKey`, `DeleteConnectionResponse`, `Vector`, `ModuleSpecifier`, `EvaluatedScript`, `SslSupport`, `IonRouterOutlet`, `apid.VideoFileId`, `RGB`, `SnackbarAction`, `EdmxParameter`, `HistogramBucketAggDependencies`, `TextureSourceLoader`, `requests.ListDrgsRequest`, `ILocalValue`, `SetupContext`, `DialogflowMiddleware`, `MatDialog`, `FastifyInstance`, `TypeParameter`, `CanvasSystemIcon`, `BooleanLike`, `UiState`, `Unlisten`, `SVGDOMElement`, `MinMaxNormArgs`, `SecurityContext`, `FormBuilder`, `CommonWalletOptions`, `ICXMakeOffer`, `_PresignUploadRequest`, `IAllExecuteFunctions`, `d.PrerenderHydrateOptions`, `DataConverter.Type`, `p5.Vector`, `CallAdapterState`, `StructFieldInfo`, `InitConfig`, `SharedControlConfig`, `KeyValueCollection`, `ListServicePipelineProvisionedResourcesCommandInput`, `Bid`, `SVInt`, `coreClient.RawResponseCallback`, `DeclarativeCall`, `SFCStyleBlock`, `ISettingAttribute`, `ShapeViewModel`, `CreateMeetingCommandInput`, `Init`, `SupCore.RemoteClient`, `StaticDataView`, `DOMRectList`, `UpgradePolicy`, `DescriptorTypeResult`, `FindProsemirrorNodeResult`, `TreeOption`, `DebugProtocol.DisconnectResponse`, `GraphicOptions`, `IntCodeComputer`, `Vector3d`, `SpriteSheetSpacingDimensions`, `SelectListItem`, `IPermission`, `File`, `DefUse`, `XMLNode`, `TestLegacyLoanToken2`, `DynamicFormControlEvent`, `CompressedPixelFormat`, `TypeBase`, `ThemeBuilder`, `FormActionType`, `test.Test`, `requests.ListDrgAttachmentsRequest`, `IEthUnlock`, `RefreshTokenDto`, `HsvColor`, `Quakeml`, `Bound`, `AccountDoc`, `TranslateService`, `WizardForm`, `ClientUser`, `ReactSource`, `UniformRandom`, `AbiOwnershipTransaction`, `RTCSessionDescriptionInit`, `InterpolationStep`, `NodeResult`, `IParameter`, `JsonRpcPayload`, `Dialogic.State`, `BubleDataType`, `ResourceProps`, `PNG`, `AppHandler`, `ImageTileEnt`, `AudioProcessingEvent`, `SymmetricCryptoKey`, `XPCOM.nsXPCComponents_Classes`, `LitecoinSignedTransaction`, `GitLabFixtureClient`, `IUserWithRoles`, `ISourceNode`, `DefaultContentNode`, `DemoVariable`, `ObjectLiteralElementLike`, `NdArray`, `ValueProvider`, `MainState`, `CannonRigidbody3D`, `SchemaEnum`, `SaleorThemeColors`, `ISpecModel`, `paper.PointText`, `EuiValues`, `UrlType`, `UniqPrimitiveCollection`, `JwtHelperService`, `IFormItemTemplate`, `ContainerWithState`, `Instance_t`, `RheaEventContext`, `Geoset`, `ZoneLocator`, `TransactionContext`, `DsProcessorService`, `CredentialTypesClass`, `IAtomMvhd`, `ListViewCell`, `ScrollStrategy`, `Args`, `RedisCacheAdapterContext`, `DescribeEnvironmentsCommandInput`, `BeanProvider`, `ASTVisitor`, `ArticleEntity`, `ChatMessageWithStatus`, `VersionDataService`, `bigInt.BigInteger`, `SubstrateExtrinsic`, `Mocked`, `IsEqual`, `Signal`, `ICell`, `IFetchParams`, `UriMatchType`, `AdditionalDetailsProps`, `NbDialogRef`, `PosBin`, `NzTreeNode`, `DescribeDBParametersCommandInput`, `DBArg`, `Credential`, `GrantIdentifier`, `d.DevServerContext`, `UnitFormService`, `MUserWithNotificationSetting`, `Mysql`, `ParseConfigFileHost`, `PvsProofCommand`, `UnsubscribeSnapshot`, `DeleteFleetCommandInput`, `requests.ListSteeringPolicyAttachmentsRequest`, `AnimatedAddition`, `AttachmentOptions`, `TransferListOptionBase`, `AlertController`, `StaticJsonRpcProvider`, `EndPoint`, `PaginatedList`, `TaskOptions`, 
`LibraryItem`, `ThunkResult`, `StringMap`, `SavedMultisigEntry`, `CancelRequest`, `NavigationViews`, `SpaceID`, `FILTERS`, `ResponderDimension`, `ProductCategory`, `ExcaliburGraphicsContext`, `LoadedVertexLayout`, `TransferState`, `GetConfigurationSetCommandInput`, `btVector3`, `OperationInstance`, `NetworkType.Mainnet`, `IAlbum`, `ControlledProp`, `BooleanResponse`, `CSSProps`, `CreateTodoDto`, `FilterExpressionNode`, `GetPrTimelinePayload`, `RPC`, `ArgResult`, `IMenu`, `EntityEffects`, `MessageHandlerContext`, `DaffAddress`, `TickSignal`, `UpdatePipelineCommandInput`, `TSInterfaceDeclaration`, `PythonPreview`, `Conv2DTranspose`, `ILogParseHooks`, `HitCircle`, `USBDevice`, `Hint`, `PrivateApiImpl`, `TimetableSession`, `TeslaStyleSolarPowerCard`, `SRWebSocket`, `sdk.VoiceProfileClient`, `IDocumentSnapshot`, `DataValue`, `ScannedPolymerProperty`, `CreateApplicationCommand`, `EmbeddablePersistableStateService`, `TiledTSXResource`, `TableService`, `KeyInKeyof`, `TableConfig`, `MongoClientConfig`, `SchemaNode`, `PivotQuery`, `ChatCompositeProps`, `IThemedToken`, `ReviewerEntity`, `TaskTypeDictionary`, `IsometricGraphic`, `TestRaster`, `EnvironmentRecord`, `Compute`, `CompilerOptions`, `RobotsByNameMap`, `ProtocolPeerInfo`, `VoilaGridStackWidget`, `ThyDialog`, `HeaderTransformer`, `bigInteger.BigInteger`, `JupyterFrontEnd`, `CalculateBoundsOptions`, `ApplicationCommandRequest`, `WexBimGeometryModel`, `CSR`, `A8`, `PerpV2Fixture`, `DocumentRange`, `common.EventData`, `ITool`, `TestDTO`, `KMSKey`, `OscillationState`, `IColorV`, `TypeVarMap`, `DeltaType`, `NotSkeletonDeep`, `DeleteStudioCommandInput`, `StringAttribute`, `SankeyDiagramDataView`, `VnetInfoResource`, `TriDiagonalSystem`, `DemoBlockType`, `ProxyAccessor`, `DecimalSource`, `RecordModel`, `CommitData`, `BetterMap`, `IThrottlerResponse`, `ManagerConfig`, `DemoExecutor`, `F1TelemetryClient`, `ts.Diagnostic`, `AuthMachineEvents`, `StateAccount`, `GridItemEvent`, `AllDecorators`, `RenderedItem`, `StatusFieldProps`, `TEElement`, `Models.IPositionStats`, `DirectoryTree`, `DaffCartAddress`, `IProducer`, `OperationArgs`, `ForwardDefinition`, `DocumentWrapper`, `Alternatives`, `IPlDocTemplate`, `RouterNavigation`, `HoveredResult`, `SetStateAction`, `ListenDecorator`, `Activation`, `EncryptedMessage`, `CreateChannelParams`, `ColumnConfigArg`, `ISharedObjectRegistry`, `SuperResolutionConfig`, `EfsMetricChange`, `ComponentVariablesPrepared`, `LendingReserve`, `PutFileContent`, `ImportedConfiguration`, `IPNResponse`, `SelectionOptions`, `IRECAPIClient`, `ResourceSpans`, `WriteLeaderboardRecordRequest`, `IFormSectionData`, `SetupFunction`, `TestTree`, `TokenFetchOptionsEx`, `IC`, `NineStar`, `VercelRequest`, `Physics2DDirectBodyStateSW`, `ElasticsearchClient`, `IMetric`, `CdkStepper`, `_OIDCConfiguration`, `authors.Table`, `BrowseDescriptionLike`, `Assignment`, `TestComponentBuilder`, `PlayerInstant`, `RegionType`, `EntitiesState`, `ResourceNotFound`, `HsAddDataOwsService`, `HttpErrorContext`, `AbstractMesh`, `ApplicationSummary`, `MatSidenav`, `DescribeRegionsCommandInput`, `AllTokens`, `Drawer`, `PolicyBuilderPaths`, `VisEventToTrigger`, `VNodeData`, `CallbackList`, `MatSort`, `FlattenedXmlMapCommandInput`, `DiscordMessageActionRow`, `ConditionalTransferCreatedEventData`, `SavedObjectMetaData`, `XUL.chromeDocument`, `DslQuery`, `MonitoringContext`, `HmrContext`, `SimpleUnaryOperation`, `TransactionParams`, `HMAC`, `ChildProcessByStdio`, `ResponseToolkit`, `IChangeTarget`, `CohortRepresentation`, `TypeConditionRestriction`, `ReshapePackedProgram`, 
`DBCoreTable`, `ILibraryRootState`, `PolicyViolation`, `LightGroupState`, `DaffCartOrderReducerState`, `Authenticate`, `TPluginSettingsProps`, `SingleKey`, `TextRenderParameters`, `TensorLike1D`, `IPed`, `SidePanelProps`, `SubmissionObjectEntry`, `GetArgs`, `TypeErrors`, `TSchema`, `MaxPooling2D`, `LiteralValue`, `AnnotationObject`, `IListenerRuleCondition`, `Events.pointerdragenter`, `XUL.browser`, `EnumValueDescriptorProto`, `Tlistener`, `ModelCtor`, `OperationNotPermittedException`, `ComicDownloaderService`, `AggsItem`, `IRoundState`, `DataRequestContext`, `QueryConstraint`, `TreeView`, `DockerImageName`, `FargateTaskDefinition`, `RenderableProps`, `SpellList`, `AudioMixObserver`, `IFunctionTemplate`, `NodeCache`, `WidgetAdapter`, `AssetState`, `OrderInfo`, `TraceCallback`, `DeleteObjectRequest`, `AttributeFlags`, `DownloadProgress`, `SlsConsoleFile`, `ReplyPackage`, `IMediatorMapper`, `VirtualWAN`, `Evaluation.Response`, `Replacement`, `MbMap`, `CreateAppInstanceUserCommandInput`, `BooleanSchema`, `IntermediateTranslationFormat`, `CLIOptions`, `LegendData`, `DpcMgr`, `IncompleteFormatStringSegment`, `ASTValidationContext`, `AnchorMode`, `SimpleState`, `AutoSuggestData`, `PathEndCoordinates`, `ManagedBackupShortTermRetentionPolicy`, `WebVRSystem`, `EngineResults.SchemaPush`, `BasicPoint`, `PatientService`, `StringInput`, `ForgotPasswordAccountsValidationResult`, `Villain`, `HTMLSuperTabButtonElement`, `UICollectionViewCell`, `InfraConfigYaml`, `FunctionInfo`, `IBrew`, `MyWindow`, `PicassoConfig`, `RouterTask`, `FlexLength`, `EPeopleRegistryState`, `Clickable`, `GetBranchCommandInput`, `DecimalPipe`, `SetIpAddressTypeCommandInput`, `ResourceDataGridWrapper`, `FunctionTemplate`, `IMatrix44`, `TransactionError`, `Docfy`, `sdk.BotFrameworkConfig`, `UpdateRoomCommandInput`, `collectSignaturesParams`, `Favor`, `ReplaySubject`, `InterfaceWithoutReturnValue`, `IStyleCollection`, `HsAddDataLayerDescriptor`, `DynamicFurParam`, `requests.ListVmClusterPatchesRequest`, `IncompleteUnescapedString`, `IPeer`, `SortedSetStructure`, `IHud`, `TreeNodesType`, `requests.ListManagementAgentPluginsRequest`, `GpuStats`, `FabricEventBase`, `CookiecordClient`, `ResultInfo`, `OrgEntityPoliciesPlan`, `DisposableCollection`, `ImageViewerState`, `IParseResult`, `SwaggerPath`, `DescribeAccountLimitsCommandInput`, `AutoRestExtension`, `NotificationEvent`, `ProviderProps`, `B0`, `TransformKey`, `WithElement`, `TEventType`, `ContractsService`, `Q.IPromise`, `ISO8601Date`, `MActorSignature`, `QueryObject`, `VPosition`, `SubstrateNetworkParams`, `Breakpoints`, `PathTransformer`, `ValidateArgTypeParams`, `BufferFormatter`, `CryptoFunctionService`, `MeasureStyle`, `vscode.WebviewView`, `RuleDescription`, `KeyEvent`, `DropInfo`, `VertexData`, `cdk.Construct`, `GfxRenderPipeline`, `AnimationCurveKeyframe`, `VirtualCell`, `socketIO.Server`, `EventProperties`, `A11ySettings`, `requests.ListCatalogPrivateEndpointsRequest`, `RenderPassId`, `IImportedArmy`, `Replacer`, `LayerId`, `UITextField`, `FormatErrorMetadata`, `PathfindingGraph`, `SQLError`, `S1Sale`, `ITimezoneOption`, `SavedReport`, `Node`, `JwtVerifier`, `ToastyService`, `YearToDateProgressPresetModel`, `IndexPatternField`, `AxisStyle`, `ethers.ContractTransaction`, `DashboardContainerFactory`, `IDateRangeActivityFilter`, `ResultPath`, `TriggerProps`, `ThLeftExpr`, `LengthPrefixedList`, `DragManager`, `LocationState`, `ServerSideEncryptionConfiguration`, `AnalysisEnvironment`, `ShadowTarget`, `ParticleSystem`, `ComposibleValidatable`, `GrowableBuffer`, 
`SinglelineTextLayout`, `CategoryCollectionStub`, `ResourceAction`, `displayCtrl.IInitConfig`, `MembersActions`, `React.Dispatch`, `DocumentSnapshot`, `DeploymentType`, `ProjectServer`, `Setup`, `TimedParagraphItem`, `AccountMeta`, `ast.ClassDeclaration`, `SessionAuthService`, `Symmetry`, `CVDocument`, `TypographyOptions`, `ContextCarrier`, `SecurityCCCommandEncapsulation`, `LayerProperties`, `CbExecutionContext`, `Id64String`, `Mod`, `Remarkable`, `DaffAccountRegistrationFactory`, `Decoder`, `TaskLibAnswers`, `FieldAccessor`, `GetOwnPropertyDescriptors`, `SurveyLogicItem`, `FileWatcherEventKind`, `SemVer`, `InjectionValues`, `INavFourProp`, `Original`, `BadgeSampleProps`, `PluginInitializer`, `InterpreterOptions`, `PiInterface`, `NavigableMap`, `ObstaclePort`, `WriteItem`, `NodeContext`, `TiledLayer`, `Grid3D`, `requests.ListListingsRequest`, `TrackList`, `ScaffoldType`, `ITagUi`, `TypeFacts`, `LoginEntity`, `UpdateArticle`, `ParsedCode`, `DescribeDataSourceCommandInput`, `sst.App`, `RelativePath`, `SerializedPrimaryKeyOptions`, `ClusterProvider`, `UISettingsStorage`, `IndexedMap`, `Resolver`, `ForwardingStatus`, `fpc__ProcessName`, `msRest.OperationQueryParameter`, `ng.IHttpProvider`, `OverflowT`, `PageBoundingBox`, `PostModel`, `TEasingFn`, `AnalyzerLSPConverter`, `DBCore`, `JavaMethod`, `VisualizeTopNavProps`, `IAccessInfo`, `MappingFactor`, `EndOfLineState`, `GatsbyConfig`, `ListDevicesRequest`, `PrivateProps`, `ICommandParsed`, `MarkerClustererOptions`, `SelectAmount`, `ChatConverseState`, `StringStream`, `Repl`, `ActionByType`, `ProtocolMessage`, `InputField`, `GfxBufferUsage`, `TweetResolvable`, `Bin`, `Mjolnir`, `NodePath`, `GrainPlayer`, `ColorService`, `NgOption`, `tr.commands.Command`, `CustomEndpointDetails`, `SecureCookieOptions`, `BarChartBarMesh`, `VectorSource`, `WorkingDirectoryInfo`, `ExampleRecord`, `ControlFormItemSpec`, `AlgPartDecoration`, `Fields`, `TestImageProps`, `DeviceFormPostData`, `CSharpNamespace`, `ApplicationTypes`, `EmailConfirmation`, `LoadDataParams`, `RequestAction`, `VirgilCrypto`, `ImportLookup`, `Attributes`, `HydrateComponent`, `PromiseSettledResult`, `PartitionLayout`, `EmbeddableFactoryProvider`, `ICnChar`, `IMinemeldConfigService`, `SyntaxCheck`, `FileSystemError`, `StringContent`, `Piece`, `HypermergeNodeKey`, `IDriverType`, `MDCSelectFoundation`, `GitAPI`, `AuthenticationStrategy`, `egret.TouchEvent`, `Utilities`, `TemplatePositionContext`, `MDCTextFieldLabelAdapter`, `StitchesComponentWithAutoCompleteForJSXElements`, `Hermes`, `CidConfig`, `IEventListener`, `UITabBarItem`, `SafeHTMLElement`, `ExternalData`, `Ptr`, `JitsiRemoteTrack`, `GameModel`, `RadixAtomObservation`, `MDCShadowLayer`, `ObserverActionType`, `NotifyOptions`, `GfxRenderHelper`, `ParsedUrl`, `NonNullExpression`, `messages.PickleStepArgument`, `VariableTable`, `TestStepResult`, `RelationInfo`, `UniversalCookies.Options`, `Addressable`, `SocketCustom`, `ColorRef`, `KeyframeNodeList`, `ShaderData`, `LCDClient`, `PartyPromote`, `LoggerConfigType`, `EasJsonReader`, `NameValuePair`, `FileSet`, `AssemblerQueryService`, `ApiMockRoute`, `RadioButtonComponent`, `NodeItem`, `GroupProblemData`, `ElementContainer`, `IParser`, `MovieDAO`, `VersionInfo`, `BackendContext`, `Stack.Props`, `TokenData`, `ApiEditorUser`, `CircuitMetadataBuilder`, `LSTMState`, `KeyRowEvent`, `changeCallback`, `MathBackendCPU`, `Voice`, `ParsedCommand`, `SetAccessorDeclaration`, `CLI_COMMAND_GROUP`, `NormalizedOutputOptions`, `OverviewSourceRow`, `PluginBuilderLens`, `CommandRole`, `XhrFactory`, 
`PresetMiniOptions`, `ElementInlineStyle`, `Merger`, `DAL.DEVICE_ID_RADIO_DATA_READY`, `SecuredSubFeature`, `DeleteEmailIdentityCommandInput`, `RawBuilder`, `VueFilePayload`, `cheerio.Cheerio`, `BindingContext`, `RenderableStylesheet`, `NodeMap`, `ShareArgs`, `SearchInterceptorDeps`, `ProperLayeredGraph`, `NodeTag`, `Coder`, `LocationItem`, `ListWorkflowsCommandInput`, `RollupCommonJSOptions`, `RepoError`, `DiagnosticChangedEventListner`, `PresSlide`, `LodashDecorator`, `CreateStudioCommandInput`, `PreviouslyResolved`, `TimeType`, `ChromeExtensionService`, `WindowComponent`, `LocationService`, `_THREE.Vector3`, `DatabaseSchemaImpl`, `WeightsManifestGroupConfig`, `SearchSourceOptions`, `AppServiceRegistration`, `APIGatewayEvent`, `LineProps`, `StoredTx`, `ExecFileException`, `HunspellFactory`, `CancelJobRequest`, `pxt.PackageConfig`, `SeparationInfo`, `TimeRangeBounds`, `StoreBase`, `RevalidateEvent`, `TrackEvent`, `ConcatInputs`, `d.Diagnostic`, `sst.StackProps`, `MappedField`, `PredicatePlugin`, `RemoteEndpointConfiguration`, `JobConfig`, `IndicesOptions`, `fetch.Response`, `CoreTypes.dip`, `PeopleSearchScroller`, `types.IActionInputs`, `LintMessage`, `HTMLLIElement`, `StudentBasic`, `IUserUpdateInput`, `GenerateFunctionOptions`, `NativeAppStorage`, `Mixin`, `JsxExpression`, `DataViewsService`, `SpringSequenceStep`, `ServiceConfig`, `GXMaterialHacks`, `PluralType`, `ISqlRow`, `NativeEventEmitter`, `Multicall`, `ListPatternType`, `IslandsByPath`, `xyDatum`, `FirebaseMachineLearningError`, `CustomConfigurationProvider1`, `ITextAndBadge`, `FlagValue`, `RecoilValue`, `KvPair`, `MapEntry`, `TaskOperations`, `SchemaFormOptions`, `SchemaDefinition`, `ICAL_ATTENDEE_STATUS`, `GroupPanel`, `TestArgs`, `FluidObjectSymbolProvider`, `NodePoolPlacementConfigDetails`, `FieldDescriptorProto`, `ReadableFilesystem`, `IEmeraldVault`, `ResolverMethodOpts`, `DocumentRef`, `DeviceConfigService`, `IFieldInfo`, `EventQueue`, `XyzaColor`, `RuleAttribute`, `FSMState`, `UpdateApplicationRequest`, `CoinSelectInput`, `ITableOfContents`, `TourStep`, `AlternativesSchema`, `UsageCollectionSetup`, `ClassPartObject`, `TransferEvent`, `BehaviorTreeBuilder`, `RtlScrollAxisType`, `Circle`, `Revision`, `JsonaProperty`, `TooltipAndHighlightedGeoms`, `SessionID`, `IUserIdentity`, `CasCommand`, `OptionElement`, `TraverseOptions`, `AzureWizard`, `EditableHippodrome`, `IValues`, `TransitionSettings`, `MyEpic`, `SubscribeFunction`, `VariablesManager`, `ApplicationSubmission`, `DeleteReportDefinitionCommandInput`, `MtxGroup`, `DataHolder`, `CKB`, `IBinaryTreeNode`, `KeyboardLabelLang`, `BoardService`, `GfxTexFilterMode`, `ReflectedType`, `Streamer`, `BuilderState`, `RpcSocket`, `Wechaty`, `CardRenderer`, `GetDomainRecordsRequest`, `SessionId`, `EntityId`, `ParsedSearchParams`, `CurrentUser`, `cback`, `RolandV8HDConfiguration`, `IRange`, `DataModifier`, `SpringSequence`, `SavedState`, `IIdentifier`, `ScryptedDevice`, `GraphQLRequestEnvelope`, `ExecutionArgs`, `IReq`, `TimeService`, `_TimerCondition`, `CustomEditor`, `ReactTestInstance`, `NoExtraProps`, `DateType`, `ThStmt`, `patch_obj`, `schema.Entity`, `CheckFunc`, `PlanetApplication`, `IParameterValuesSource`, `ForumAction`, `ITriggerPayload`, `FireClient`, `MemoryStream`, `BleepGeneric`, `IEventDispatcher`, `DeleteJobRequest`, `PiEditor`, `LastValueIndexPatternColumn`, `BitcoinCashBalanceMonitorConfig`, `requests.CreateProjectRequest`, `IDockerComposeOptions`, `WastePerDay`, `IProviderOptions`, `NullLogger`, `NpmConfig`, `EventsClientConfiguration`, `Brand`, `IEscalation`, 
`StatefulDeviceManager`, `SearchFilterState`, `WarriorLoader`, `AppStateStore`, `AccountRefresh_VarsEntry`, `ITicks`, `ESTreeNode`, `LMapper`, `ConfigDefinition`, `RumPublicApi`, `ITimeLogFilters`, `IApolloServerContext`, `TerminalProviderSeverity`, `UseSRTP`, `ApiAction`, `USBInTransferResult`, `StoreState`, `BandViewModel`, `CLI`, `LogAnalyticsSourceExtendedFieldDefinition`, `NormalizedOption`, `NewExpression`, `ConfigBundle`, `FormContextValues`, `MockElectron`, `PortRecord`, `EtcdOptions`, `MacroMap`, `VerifierOptions`, `HexLiteralNode`, `BluetoothError`, `LessParser`, `ConfigSetExecutionGroup`, `PathlessInputOperation`, `CliFlags`, `Tracker`, `EnvironmentAliases`, `RedisTestEntity`, `NugetPackage`, `GroupNode`, `ProgressCb`, `ViewTest`, `JsxAttributeLike`, `MotionValue`, `ToggledFiltersState`, `HTMLTextAreaElement`, `CSSInterpolation`, `ISendingUser`, `MockBackend`, `AST.ArrayAST`, `RTCIceCandidate`, `HierarchyRequestOptions`, `IFileEntry`, `HTMLAudio`, `DefaultEditorSideBarProps`, `AuthSettings`, `TodoListApplication`, `UseStore`, `ExecAsyncResult`, `IFilePane`, `IResultSetValue`, `ServiceConfigDescriptor`, `FiltersBucketAggDependencies`, `NotificationRepository`, `FindTaskQuery`, `ComparisonFunction`, `NetworkEndpointType`, `glTF.glTFNode`, `FacemeshConfig`, `RedisModules`, `UseFilterManagerProps`, `FormatRange`, `FontVersion`, `OutputSelector`, `TestTerminal`, `DeployedWithoutEmailWallet`, `GraphState`, `IDynamicValues`, `IResolvers`, `OutputStream`, `obj`, `INavigationFeature`, `LegendSpec`, `RelayerUnderTest`, `ThemeCreator`, `RecipientAmountCsvParser`, `IThunkAction`, `StatusBarItemsManager`, `RawConfigFile`, `ThemeLoadOptions`, `HEventType`, `CodedError`, `ComponentNode`, `SavedQueryService`, `Harmony`, `IExecutableContext`, `TableOfContents`, `MdDialogConfig`, `DocumentedError`, `DynamicAlternative`, `JPattern`, `StringKeyOf`, `MutationFn`, `DiezTypeMetadata`, `ReadValue`, `UIGestureRecognizer`, `CommandControlMessage`, `GuaribasAnswer`, `StyleMapping`, `SystemRequirement`, `EventDetails`, `UnionAccumulator`, `TypeOrmModuleOptions`, `IInsertInput`, `MiddlewareOptions`, `CardSpace`, `SongBundle`, `SlideElement`, `PropertyMap`, `DiscoverServices`, `IChangedArgs`, `KxxRecordBalance`, `VirtualFileInterface`, `DateRangePickerProps`, `SvgDebugWriter`, `social.InstancePath`, `DayGridWrapper`, `ManagerOptions`, `RemoteArtifact`, `SerializationStructure`, `LayoutRectangle`, `Border`, `GetIntegrationResponseCommandInput`, `GPGPUContext`, `BigDecimal`, `UpdateUserCommand`, `ComputedRef`, `IFormFieldValue`, `PluginStreamAction`, `AbstractViewer`, `DeleteRegexPatternSetCommandInput`, `PushTransactionArgs`, `JobMessage`, `SpaceFilter`, `AsyncPipeline`, `StringLiteralUnion`, `PrefetchIterator`, `QueryError`, `SearchConfigurationService`, `TaskManagerDoc`, `SAXParser`, `BlockNumberUpdater`, `MouseUpEvent`, `AnimatorRef`, `KPuzzle`, `HelmetData`, `ToggleProps`, `SentPacket`, `WebApiConfig`, `oicq.Client`, `CombinedText`, `IAutorestLogger`, `util.StringEncoding`, `logger.Logger`, `transcodeTarget`, `T`, `GithubClient`, `IFileBrowserFactory`, `alt.Vehicle`, `UseMetaStateOptions`, `OptionalIdStorable`, `ModuleJSON`, `IMaterialPbrMetallicRoughness`, `RuleScope`, `OutputBundle`, `ISubsObject`, `SortOptions`, `ProxyController`, `DownloadedBinary`, `IJSONSchema`, `AlignValue`, `OnEventCallback`, `SignKeyPair`, `OutputTargetDistLazyLoader`, `OptionType`, `ISharedMap`, `SCondition`, `IDocumentAttributes`, `pxtc.CompileResult`, `ConditionalTransaction`, `DownloadedFiles`, `TargetGraphQLType`, 
`IOneClickAppIdentifier`, `ControllerHandlerReturnType`, `EMailProcessingStatus`, `ComponentOpts`, `NgModuleFactory`, `AjvFactory`, `MediaDescription`, `UVSelect`, `PublishData`, `OperationSupportMatrix`, `ServerError`, `IStyledComponent`, `ExperimentSnapshotDocument`, `DiezType`, `ChooseActionStateMachine`, `LED`, `StopJobCommandInput`, `ArgsDescriptions`, `ToolbarTheme`, `ContractCaller`, `NormalizedTxBitcoin`, `GenericParameter`, `ESTree.Node`, `CacheObject`, `CompilerWorkerTask`, `XPlace`, `HandleActionSharedParams`, `JobRun`, `CdkStep`, `PlacementOptions`, `ReviewComment`, `Org`, `ImportPath`, `ReferenceCallback`, `BlockBlobClient`, `CustomPropertySetUsage`, `ResourceTimeGridWrapper`, `DriveManagerContract`, `CatchupToLatestShareResult`, `firebase.User`, `WithIndex`, `CancelTokenSource`, `CategoriaProps`, `NbThemeService`, `ISuite`, `BasicPizzasProvider`, `AuthenticationType`, `FlamelinkFactory`, `NumberSystemType`, `ImageStore`, `LuminanceSource`, `WithNode`, `S3Location`, `RtspSession`, `AttributeValue`, `Teacher`, `IZ64Main`, `FieldQuery`, `ts.TypeReference`, `LimitLine`, `ICamera`, `PubRelease`, `GetInfoResult`, `MockImportRegistry`, `IChannelDB`, `ApiRecord`, `CardListItemType`, `TemplateDeserialized`, `GitPullRequest`, `IGitManager`, `TypeDeclaration`, `NSVElement`, `WebDNNWebGPUContext`, `SankeyDiagramSettings`, `GridItemData`, `UdpTransport`, `TempStats`, `GridOptions`, `LogParser`, `CldFactory`, `VideoStreamRendererViewState`, `RowContext`, `FileLocationQuery`, `GoalSettingsService`, `ParjsResult`, `ITokenRequestOptions`, `CellRange`, `SmallMultipleScales`, `StepResultGenerator`, `WithPromise`, `ts.server.PluginCreateInfo`, `TextBox`, `ProjTreeItem`, `glTF1`, `Checkout`, `NavigationContainer`, `PathExpression`, `ProtoKeyType`, `ConstantsService`, `IQueryFeaturesOptions`, `InitiateAuthResponse`, `TextureSource`, `GX_Material.GXMaterialHacks`, `VisualizeServices`, `model.TypeName`, `DragDropRegistry`, `SortKeyRule`, `PTestNode`, `IKeyCombo`, `ScanResult`, `TranslationItem`, `BinaryOperatorToken`, `TypeIdentifier`, `ECDH`, `uint32`, `StyleBuilder`, `InlineVariable`, `OperationTypeNode`, `PTPDataView`, `SingleValueProps`, `GX.BlendMode`, `MutableImageRef`, `VariantAnnotationSummary`, `CatService`, `StatsNode`, `HALLink`, `BaseSourceMapTransformer`, `UserTie`, `FakePromise`, `LiteralShape`, `RSAPublicKey`, `SendMessageFn`, `ImageUrlTransformationBuilder`, `PromiseRes`, `Applicative4`, `ApplyGlobalFilterActionContext`, `TimefilterConfig`, `IPageProps`, `MessengerTypes.Message`, `UploadResult`, `requests.SearchListingsRequest`, `DoorLockCCConfigurationSet`, `GenericTestContext`, `AppManager`, `GameState`, `ImportMode`, `ViewCommon`, `Crawler`, `LastFmApi`, `pingResponse`, `ISearchStrategy`, `IPositionCapable`, `WirelessMode`, `ResultSet`, `LogConfig`, `SortedArray`, `PluginName`, `JsCodeShift`, `ITaskContainer`, `ObjectContaining`, `SizeConfig`, `BodyDefinition`, `MkDirOptions`, `CycleDimension`, `GetInstanceProfileCommandInput`, `IPluginData`, `DeleteStorageObjectId`, `Occurrence`, `RequestDetailsProps`, `ColorPickerService`, `GraphQLInputFieldMap`, `PointStyleAccessor`, `TypeConstructionContext`, `ITeam`, `TensorWithState`, `Web3Service`, `SubmitHandler`, `io.Socket`, `AreaUI`, `ListEndpointOptions`, `ListMemberAccountsCommandInput`, `IModule`, `Box3`, `CPlusPlusRenderer`, `RawNavigateToItem`, `ShareCallbackFunction`, `CompositeMetric`, `AtomicAssetsContext`, `LabelStyle`, `MultiRingBuffer`, `GetExperimentCommandInput`, `QueryList`, `CreateBucketRequest`, `CustomDomComponent`, 
`Monad1`, `requests.ListIdentityProvidersRequest`, `WordCache`, `GLTFResource`, `AuditAssertion`, `SongData`, `UserProps`, `UUID`, `RouterLoaderOptions`, `ESTree.MethodDefinition`, `HistoricalEntityData`, `KyselyPlugin`, `CompletionItemData`, `SchemaField`, `EffectDef`, `Array`, `IHDPreviewState`, `FileFlatNode`, `HierarchyDefinition`, `ActionsSdkConversation`, `IAppVolume`, `SoundChannel`, `MockCustomElementRegistry`, `UpdateVolumeCommandInput`, `ModalContextProps`, `RigConfig`, `MyItem`, `ICoordinates3d`, `BindingHelpers`, `CLM.ExtractResponse`, `CommandClassDeserializationOptions`, `Worker`, `DataTransferEvent`, `CubeTexture`, `DeployProxyOptions`, `ErrorMark`, `GeoPath`, `IndexedCollectionInterval`, `DataVariable`, `Truffle.Contract`, `Tasks`, `AccountConfig`, `TimelineTrack`, `Event24Core`, `Renderable`, `GanttUpper`, `SteeringPolicyAnswer`, `API.IMiscInfos`, `ChannelBytes`, `CompletionInfo`, `DownloadedImage`, `GridStackModel`, `CfnExperimentTemplate`, `ValidatorResult`, `InstanceConfiguration`, `TestClassesVariant`, `LanguageClient`, `HOC`, `Tipset`, `GX_VtxDesc`, `ApiTreeItem`, `AlreadyExistsException`, `SSM`, `Offer`, `OptionDetails`, `protocol.FileLocationRequest`, `PubkeyInfo`, `InjectorContext`, `IInspectorRow`, `ImagePipe`, `LineBatch`, `AlertResult`, `InputRule`, `EditorState`, `MiddlewareConsumer`, `JsPsych`, `SandDance.VegaDeckGl.types.VegaBase`, `AuthUtilsService`, `JacksonError`, `GeneratePrivateKey`, `DiagramMaker`, `ConstantAst`, `Angulartics2IBMDigitalAnalytics`, `SystemModule`, `RuleGroup`, `DateFormatterFn`, `MatchPairsOptions`, `UtilityNumberValue`, `WorldObject`, `CreditCard`, `MessagesBag`, `Buff`, `FileOverwriteOptions`, `ScanOptions`, `StringFormat`, `AccessLog`, `ExpoConfig`, `requests.ListDrgRouteTablesRequest`, `X12FunctionalGroup`, `USBEndpoint`, `integer`, `HotkeysEvent`, `Getter`, `TestInstance`, `SafeSignature`, `DefaultEditorControlsProps`, `BuildingState`, `PaginateQuery`, `Mongoose.Model`, `StringLookupMap`, `CssDimValue`, `WindowsLayout`, `ButtonDefinition`, `DoneFn`, `GroupId`, `EngineArgs.SchemaPush`, `CreateTestRendererParams`, `LegacyCompilerContext`, `ServerSyncBufferState`, `AppAPI`, `SnackbarContext`, `Knex.Config`, `EventActionHandlerCallableState`, `ToastButton`, `IEmployeeStatisticsHistoryFindInput`, `GfxMipFilterMode`, `InteractionEvent`, `BeforeInstallPromptEvent`, `IOdataAnnotations`, `PointerType`, `NavLink`, `Export`, `GlobalUserData`, `Octant`, `StandardEvents`, `ValuesStoreParams`, `RequestMatcher`, `RPCClient`, `MotionData`, `DataProxyErrorInfo`, `MessageAction`, `IComponentComposite`, `ReportBuilder`, `HashConstructor`, `GitTag`, `LinkComponent`, `AssertionError`, `IPropertyTypeValueDescriptor`, `IModelAppOptions`, `SortingOption`, `DocumentHighlight`, `GooglePlus`, `ECClass`, `ReadModelMetadata`, `IGetPaymentInput`, `MediaUploadForm`, `ISerialFormat`, `SurfaceLightmapData`, `ArticleProps`, `JobTrigger`, `AbstractValue`, `PixelLineSprite3D`, `TransitionSpec`, `MDL0_NodeEntry`, `PreQuestInstance`, `SceneStore`, `Compartment`, `WithId`, `FactRecord`, `BitBuffer`, `ThroughputSettingsGetResults`, `JsDocAndComment`, `ACLCanType`, `ControllerData`, `IViewPort`, `ActionDefinition`, `PageBlobGetPageRangesResponse`, `ApplicationEventData`, `PrivateApi`, `ReactiveEffectRunner`, `AutoScalingConfigurationSummary`, `BSPBoxActor`, `IEnhancer`, `StackGroupPath`, `ScalarType`, `InputConfig`, `Consultant`, `ContentDimensions`, `Arg0`, `AudioPlayer`, `ConnectionUI`, `UpdateFn`, `BackgroundPageStyles`, `ARUIViewOptions`, `DefaultInspectorAdapters`, 
`SettingsOptions`, `BindingInfo`, `RollingFileContext`, `ItemSection`, `HttpResponseBase`, `JsonRpcParams`, `ListPicker`, `DynamoDB.BatchGetItemInput`, `FileSystemResolver`, `InvariantContext`, `RemoteResource`, `IKeypair`, `DeviceConfigIndexEntry`, `BottomNavigationItem`, `CommentStateTree`, `BatchResponse`, `CustomLocale`, `GeolocationPosition`, `Pin`, `IRestApiResponse`, `NPCActorItem`, `HierarchyNode`, `FenceContext`, `CanvasView`, `TReturnType`, `ISelectOption`, `ITokenParser`, `FaunaTime`, `FlowTreeTopicNode`, `ReactPDF.Style`, `LiteralReprAll`, `ModelState`, `KeysToCamelCase`, `BoardDoc`, `IntrospectionInputValue`, `TabInstance`, `DispatchByProps`, `WglScene`, `CountService`, `ColorSwitchCCGet`, `IWinstonData`, `Wins.RankState`, `SparseMatrix`, `KeyBindingProps`, `MagitChange`, `ReportingCsvPanelAction`, `ScaledUnit`, `ICommands`, `HsAddDataUrlService`, `d.CollectionManifest`, `AnalyzerEntry`, `TabItemSpec`, `SankeyLink`, `textViewModule.TextView`, `CharacterClass`, `GeometryContainmentRequestProps`, `CanvasEngine`, `VcsAccountDatabase`, `getSubAdapterType`, `WidgetDef`, `IDataFilterInternal`, `BinanceWebsocket`, `ModelPrivate`, `VisConfig`, `BluetoothScale`, `TypeHierarchyItem`, `requests.ListVolumesRequest`, `StoredEncryptedWallet`, `NoteResouce`, `MultipleClassDeclaration`, `ConvCommitMsg`, `BufferContainer`, `ArrayBindingPattern`, `FileFormat`, `ArrayObserver`, `LogAnalyticsLabelDefinition`, `OafService`, `ZoneState`, `IExpressServerOptions`, `BUTTON_SIZE`, `AccessoryTypes`, `ConsumerContext`, `UpdateServiceRequest`, `ColumnSettings`, `ITransformHooks`, `RealtimeChannelInfo`, `ContentTypeProps`, `IDocumentSystemMessage`, `TreemapSeriesData`, `ChangesetProps`, `MaterialInstanceState`, `MarkType`, `SprottyWebview`, `NineZoneState`, `SVGPathElement`, `Curl`, `CostMetric`, `ISequence`, `DispatchPattern`, `Arithmetic`, `AccountBalanceService`, `HistoryStore.Context`, `DeterministicDeploymentInfo`, `ITranslationService`, `FakeUsersRepository`, `AnimeDetailsFields`, `ApmSystem`, `ITextFieldProps`, `Dashboard`, `EdmxFunctionImportV4`, `NormalExp`, `SurveyObjectProperty`, `LintConfig`, `Linker`, `VisualGroup`, `op`, `InputChangeEvent`, `B3`, `InvalidRequestException`, `VideoStreamRendererView`, `GraphFrame`, `SonarrSettings`, `WrappedComponentType`, `RangePartType`, `OpenAPIObject`, `CreateAccountsRequestMessage`, `Arena`, `AwsService`, `DateAdapter`, `App.webRequest.IRequestMemory`, `PortalWorldObject`, `Paragraph`, `SequentialLogMatcher`, `IPagingTableColumn`, `PluginLoader`, `DesignerNode`, `IterableExt`, `GraphQlQuery`, `VarSymbol`, `A1`, `SignerFetchRpc`, `GenerateTypeOptions`, `PoseNetOperatipnParams`, `Camera`, `CardActionConfig`, `IBlocksFeature`, `SupEngine.Actor`, `AsciiOperatipnParams`, `ISetupFunction`, `TabsState`, `DecodedRouteMode`, `PositionObject`, `Yendor.BSPNode`, `d.TypesImportData`, `IHttpClientOptions`, `ProgressOptions`, `SwaggerBaseConfig`, `ObservableInput`, `TestRenderer`, `WorkloadType`, `ExercisePlan`, `TypeEnvironment`, `K7`, `LanguageMode`, `PromiseAndCancel`, `BlockExport`, `ChatClient`, `FetchHeaders`, `MemberName`, `VideoStreamDescription`, `InMemoryCache`, `PolicyDetails`, `ProblemIndication`, `OutputOptions`, `BertNLClassifierOptions`, `IncomingWebhookSendArguments`, `PROTOCOL_STEPS_ID`, `IObserverHandle`, `MagicExtensionError`, `ValueID`, `SliderOpt`, `OpenFileFilter`, `ReportTarget`, `DocumentNode`, `UpdateConnectionResponse`, `TransformedData`, `CurrencyService`, `DebtRegistryEntry`, `RateLimitState`, `AbstractOptions`, `CornerMarker`, `PipeDef`, 
`UITransform`, `SchedulerPromiseValue`, `EntityTypeProperty`, `CommitOrderCalculator`, `ResizeGripResizeArgs`, `Observable`, `Group.Point`, `IDeploymentTemplate`, `RoundingModeType`, `DataPublicPluginSetup`, `Inventory`, `ILinkedClient`, `MeetingParticipant`, `CircularLinkedListNode`, `PoiTable`, `ILoaderOptionsPipe`, `vec2.VectorArray`, `AnalyticsDispatcher`, `ConfigurationProps`, `MasternodeBlock`, `AggDescriptor`, `TransactionalFileSystem`, `SwiftVirtualNetwork`, `ElementKind`, `BackupData`, `PageComponent`, `KinesisFirehoseDestination`, `DataModel`, `CallResult`, `InputProps`, `TemplateParameters`, `PreferenceChange`, `FunctionFiber`, `vscode.CodeLens`, `ToastData`, `IContractWrapper`, `d.ResolveModuleIdOptions`, `IEcsTargetGroupMapping`, `OutputTargetStats`, `SFUISchemaItemRun`, `SearchResults`, `IGLTFNode`, `StyleProp`, `CredentialRecord`, `__HeaderBag`, `AsyncOrderedIterable`, `TupleTypeNode`, `NVM3Page`, `requests.ListTaggingWorkRequestLogsRequest`, `UserLoginResource`, `EventListeners`, `IChannelSigner`, `SetValue`, `Global`, `TarTransform`, `IResultTab`, `MagicSDKAdditionalConfiguration`, `GetUserCommandInput`, `ErrorAlertOptionType`, `FakePrometheusClient`, `ANodeExpr`, `TDataGroup`, `Discord.TextChannel`, `BubbleChartData`, `T4`, `requests.ListMetricsRequest`, `btCollisionObject`, `Party`, `RemoteData`, `MorphTargetManager`, `NzGraphDataDef`, `DukDvalueMsg`, `PaneProps`, `RuntimeError`, `RemoteTokenCryptoService`, `DomEventArg`, `OOPTypeDecl`, `SelectCard`, `QueryParamsType`, `ParseTreeMatch`, `TriggerPosition`, `ManagementClient`, `ReferenceRenderHandler`, `ProjectMetadata`, `ModuleWithProviders`, `HttpException`, `KeyType.rho`, `TheoryItem`, `GraphLayoutType`, `EventDescriptor`, `DeleteOrganizationCommandInput`, `FactoryIndex`, `ReactHarness`, `IResolvedQuery`, `JobChannelLink`, `SequenceNumber`, `TextureType`, `ThunkType`, `ProcessRepresentationChainModifier`, `AppThunkDispatch`, `PyteaService`, `DetachPolicyCommandInput`, `_resolve.AsyncOpts`, `PrunerConfig`, `ObservedDocument`, `TrackFormat`, `DependencyGraphNodeSchema`, `IPermissionSearchFilters`, `DomainItems`, `MapSearchCategory`, `CreateProcedureWithInput`, `CanaryAnalysisConfiguration`, `DeleteLeaderboardRecordRequest`, `GradientVelocity`, `SpringRequest`, `GetWrongDependenciesParams`, `StateAccessor`, `JestProcessRequest`, `TemplateListItem`, `LoggerFunction`, `MessageBundle`, `IEvent`, `IdentityProvider`, `SteemiaProvider`, `FieldsService`, `AssertionLevel`, `ContactConstraintPoint`, `Tweenable`, `UpdateProfileRequest`, `WaterfallChartData`, `gameObject.Battery`, `GfxCoalescedBuffersCombo`, `MerchantGoodsEntity`, `ICUToken`, `MatchDataSend`, `LogWrapper`, `XlsxService`, `MetadataClient`, `VisualizeAppStateTransitions`, `WorkDoneProgressReporter`, `RequestOptionsArgs`, `React.AnimationEvent`, `DraggableLocation`, `extendedPingOptions`, `CurveType`, `Ranking`, `FilePropertyReader`, `IdentifiedReference`, `ExitCode`, `ParameterListDetails`, `IEntrypoint`, `SnsDestination`, `IEntityRef`, `YDefinedFn`, `AbortMultipartUploadCommandInput`, `MockUser`, `JsonRpcError`, `AlignmentTypes`, `BaseCursor`, `StateChangeEvent`, `ColumnDefinition`, `SavedObjectsUpdateObjectsSpacesOptions`, `Alias`, `TTree`, `GraphicsGroup`, `AwaitedMessageEntry`, `i18n.Placeholder`, `GenericObject`, `NotifyParams`, `RematchDispatch`, `MockHTMLElement`, `OnboardingItem`, `Bip32Path`, `TKeyboardShortcut`, `CallEndReasons`, `ListRange`, `monaco.CancellationToken`, `StreamLabs`, `CSSDocument`, `KeyResultService`, `SObject`, 
`SavedObjectsClientCommonFindArgs`, `TransformOriginAnchorPosition`, `vscode.ProviderResult`, `ITokenModel`, `SteeemActionsProvider`, `EvmNetworkConfig`, `Snackbar`, `IArray`, `CustomAction`, `MenuInner`, `ComputedShapeParams`, `AppDispatch`, `OutputTargetCustom`, `Container3D`, `DataSourceItem`, `ExprContext`, `EventRegisterer`, `RARC.RARCFile`, `TLinkedSeries`, `RoutingTable`, `Credit`, `V0RulesService`, `QueryCapture`, `LitParser`, `FieldSetting`, `RolesEnum`, `ColorBlindnessMode`, `JsonObject`, `STAT`, `NetworkRequest`, `ICompileProvider`, `CurlCode`, `HeaderMapType`, `ConstructorDeclaration`, `ImportOptions`, `ApiScope`, `GLuint`, `AccountFilterData`, `EditDialogData`, `ClassNameStates`, `NodeHeaders`, `PredictableSupportCode`, `LoadingEvent`, `algosdk.Transaction`, `IDocumentInfo`, `TestMaskComponent`, `GlobalPooling2DLayerArgs`, `AdminService`, `IImageExtended`, `FilterizrOptions`, `RobotStateAndWarnings`, `ServiceErrorType`, `NativeActivation`, `PersistedLogOptions`, `FeedbackShowOptions`, `DynamicTreeCollisionProcessor`, `n`, `CalendarProps`, `GunNode`, `SubMesh`, `PlayerBattle`, `UpgradeSchemeWrapper`, `ICustomData`, `ODataModelEntry`, `CreateSessionCommand`, `Kernel.IOptions`, `MicrosoftComputeExtensionsVirtualMachineScaleSetsExtensionsProperties`, `VertexAttributeDefinition`, `UseTournamentRoundsState`, `LambdaHandler`, `SavedSearchTypes`, `NgZone`, `ElementFactory`, `ManagementAgentGroupBy`, `StatedFieldMeta`, `ScalingPolicy`, `DateProfile`, `Input.Gamepad`, `RequestConfiguration`, `BaseImageryMap`, `GithubConfiguration`, `Position`, `FunctionTypeFlags`, `Extra`, `FunctionMutability`, `BreakStatement`, `ClanAggHistoryEntry`, `InternalConfig`, `TestProvider`, `ts.ConditionalExpression`, `d.ResolveModuleIdResults`, `Backup`, `CommandInstance`, `BaseSymbolReference`, `ChangeType`, `IGrid`, `InterfaceWithConstructSignature`, `IIconSubset`, `Converter`, `UtilConvertor`, `BaseIncrementOptions`, `InterpolationPart`, `PatchRequest`, `EnumerateVisualObjectInstancesOptions`, `IExchange`, `FtpNode`, `UnregisteredAccount`, `MediaQueryList`, `FModel.LoadSettings`, `IFoo`, `IORouterRegistry`, `BitBucketCloudPRDSL`, `StyleMapLayerSettings`, `ICtx`, `Jimp.Jimp`, `ServerPlatform`, `M3ModelInstance`, `OAuthScope`, `ICoordinateData`, `GradientSize`, `IDejaDropEvent`, `TypeBuilder`, `Display`, `NormalizationHandler`, `CodeModDefinition`, `TranslationsType`, `UdpTally`, `FigmaPaint`, `DocMetadata`, `SfdxFalconProject`, `IWithComputed`, `DocumentHighlightParams`, `UIWaterStorage`, `TestEmitter`, `SimpleTemplateRunner`, `OptionsInterface`, `ResolveReferenceFn`, `PotentialEdgeInfo`, `PDFDict`, `IMapItem`, `IHsl`, `InternalModifiers`, `EndpointOptions`, `OrderData`, `BasePackageInfo`, `ExtendedCompleteItem`, `LayoutConfigJson`, `WriteBatch`, `SingleSelectionHandler`, `LoaderFn`, `IProperty`, `SuggestionsService`, `FinalizeHandlerArguments`, `SpecConfiguration`, `OPENSEARCH_FIELD_TYPES`, `DocumentRequest`, `SysMenu`, `IDataSlice`, `DataSourceSettings`, `CyclicTimeFrame`, `IPipeFn`, `VertexEvent`, `ISolutionExplorerService`, `mssql.config`, `RulesPosition`, `Injector`, `jsmap`, `EditProps`, `vscode.MarkdownString`, `BabelChain`, `ExperimentInterface`, `StackResult`, `CalculatedIndicatorValues`, `GridMaterial`, `WheelEvent`, `sdk.SpeechConfig`, `Float`, `ActiveDescendantKeyManager`, `PDFPageTree`, `VcsItemRef`, `MetaTagState`, `AuthPipe`, `p5ex.p5exClass`, `XTheme`, `HostKind`, `TagValidation`, `SyncProtocol`, `IpcMainInvokeEvent`, `CreateSecurityProfileCommandInput`, `M3Model`, `IRuleSpec`, 
`TProviders`, `RegionService`, `IHttp`, `ActorPath`, `ActionReducerMap`, `IRegisteredPlugin`, `JointComponent`, `FormattedBuilderEntry`, `CreateFunctionCommandInput`, `NzNotificationDataOptions`, `cxapi.CloudFormationStackArtifact`, `K5`, `RuleCatalog`, `SelectionInterpreter`, `IVorbisPicture`, `MenuComponent`, `GasModePage`, `ScalarsEnumsHash`, `FormOutput`, `SubmitKey`, `L1L2`, `BFBBProgramDef`, `Ship`, `GetDeploymentCommandInput`, `VLIEOffset`, `CapDescriptor`, `CodeActionProvider`, `Gif`, `TOut`, `LocationChangeListener`, `DataEventEmitter.EventCallback`, `DaffContactState`, `TArray`, `IAMCPCommand`, `LngLatAlt`, `OrganizationEditStore`, `StateMachineTargets`, `ISite`, `AnimatorChildRef`, `PlasmicTagOrComponent`, `ObjectQuery`, `MStreamingPlaylist`, `AsyncHierarchyIterable`, `TFolder`, `GovernorOptions`, `TableSelectionArea`, `TransactionWalletOperation`, `ConsoleExpression`, `TopicSubscription`, `ManifestCacheProjectAddedEvent`, `TargetLayoutNode`, `KeycodeCompositionFactory`, `IColor`, `RequestHeader`, `Ticker`, `Preference`, `RemoteFileItem`, `ExpressionListContext`, `HassEntities`, `ts.Statement`, `PromiseResult`, `NoticeService`, `DescribeReservedElasticsearchInstanceOfferingsCommandInput`, `UiActionsSetup`, `ThemesDataType`, `DeleteRoomRequest`, `ComponentFramework.Context`, `ChildMessage`, `yubo.RecordOptions`, `PluginCreateOptions`, `TextPlacement`, `StyleGenerator`, `DeploymentFileMapping`, `fixResults`, `DebeBackend`, `CalendarViewEvent`, `ContactService`, `ExtensionProps`, `VaultOptions`, `CommentDocument`, `GluegunCommand`, `ScopedObjectContextDef`, `AbiEvent`, `postcss.Root`, `PropertyMeta`, `ShuftiproKycResult`, `PDFRef`, `Dot`, `MultiChannelAssociationCC`, `ChatThreadPropertiesUpdatedEvent`, `FullLink`, `INgWidgetSize`, `KeysData`, `sdk.SpeechRecognitionResult`, `DisplayProcessor`, `ContentObserver`, `Generator`, `DeprecatedButtonProps`, `MutableControlFlowEnd`, `MoneyAmount`, `CreateProcedureWithoutInput`, `RegisterCertificateCommandInput`, `ICommandWithRaw`, `i128`, `BaseView`, `RelativeFunction`, `ExactC`, `IConstruct`, `VerticalAlignment`, `DatePicker`, `RuleFilter`, `TradeHistoryAccount`, `RetryOptions`, `Scoreboard`, `QueryData`, `RegisteredServiceAttributeFilter`, `MatSelectChange`, `VariableDefinitionContext`, `ListNodegroupsCommandInput`, `Parser.ASTNode`, `CurrentItemProps`, `StyledTextNode`, `TableListParams`, `WordcloudSpec`, `GridTile`, `IPageInfo`, `MongoCommand`, `ParentType`, `IAssetSearchParams`, `RuntimeExtensionMajorVersions`, `PadchatMessagePayload`, `RenderContext`, `ListRegexPatternSetsCommandInput`, `TransformationContext`, `JsonBuilder`, `OutfResource`, `RoomTerrain`, `PrivateCollectionsRoutes`, `MatchHandler`, `PartialValues`, `CLM.Condition`, `CompositeGeneratorNode`, `requests.ListVolumeAttachmentsRequest`, `Paging`, `Identifiable`, `PongMessage`, `IHomebridgeUiFormHelper`, `GraphQLEnumValue`, `Coords3D`, `Blockly.WorkspaceSvg`, `IGitResult`, `GridModel`, `RequestBodyObject`, `DAVObject`, `FullIconCustomisations`, `Additions`, `StyleIR`, `Extension`, `RxLang`, `SourceData`, `NoteRepository`, `OrganizationAccount`, `PointMesh`, `FeatureModule`, `RequestEntry`, `SyntheticPointerEvent`, `SearchOption`, `ChainableComponent`, `SnapshotFragmentMap`, `CreateGatewayCommandInput`, `DigestCommandOptions`, `TorrentDAO`, `BaseDataOptionType`, `ScopeNamer`, `ExclusiveDrawerPluginConstructor`, `ProjectUploader`, `ThemeStore`, `RegistrationService`, `SyncService`, `AdaptContext`, `SimulateOptions`, `ChannelResource`, `SortColumn`, `UIClass`, 
`DocBlockKeyValue`, `WindowModule`, `ISkillInfo`, `ImageItem`, `ExplicitPadding`, `SDKError`, `TokenPricesService`, `ActionDefinitions`, `MapControls`, `SingleOrArray`, `IEstimation`, `GfxCompareMode`, `Re_Exemplar`, `ConnectionCallback`, `MergedCrudOptions`, `WaterInfo`, `IXMLFile`, `CloudBuildClient`, `SwaggerDocument`, `BoxOptions`, `ParsedQs`, `GeoSearchFeature`, `MegalodonInterface`, `PutConfigurationSetDeliveryOptionsCommandInput`, `ISharePointSearchQuery`, `BasicCCGet`, `DriverContext`, `TrackByFunction`, `InboundStream`, `QueryObjOpts`, `DaffCartItemFactory`, `TreeviewFlatNode`, `EncodingType`, `Seconds`, `ResourceConfig`, `DiscoverPlugin`, `RouterStub`, `Interceptor`, `AdminJS`, `LocationSource`, `ExternalSubtitlesFile`, `LSTMCell`, `HttpInterceptord`, `STData`, `RepoOptions`, `Sources`, `ISignaler`, `UpdateBuilder`, `ParseCssResults`, `Seam`, `IEntityOwnership`, `ImageLike`, `BSplineSurface3dH`, `SfdxFalconError`, `ValueFormatterParams`, `GamepadButton`, `AmmConfig`, `ProjectModel`, `TypographyVariant`, `Automerge.Diff`, `QuestionMapType`, `ChildProcess.ChildProcess`, `WechatMiniprogram.CanvasContext`, `ListPublicKeysCommandInput`, `LiteralLikeNode`, `BuddyBuild`, `IAmAnotherExportedWithEqual`, `PrismaClientValidationError`, `TilePathGroup`, `ListStreamsRequest`, `RegExp`, `TransactionResponseItem`, `DeleteApplicationReferenceDataSourceCommandInput`, `ComponentInstruction`, `requests.ListSoftwareSourcePackagesRequest`, `UsedSelectors`, `PyJsonDict`, `TPT1AnimationEntry`, `BaseOperation`, `ResolvableCodeLens`, `DLabel`, `IndentedWriter`, `Session.IModel`, `AnkiOperationSet`, `MatrixModel`, `Explanation`, `CheckoutAction`, `StackLineData`, `ExportNodeProperties`, `AccessPolicy`, `IReCaptchaInstance`, `Relation`, `SecurityPluginSetup`, `requests.ListQuickPicksRequest`, `PubGroup`, `SettingName`, `PrincipalPermissions`, `MapType`, `IParentNode`, `AddressInfo`, `ParameterMetadata`, `IGarbageCollectionState`, `THREE.WebGLCapabilities`, `NoiseServer`, `ChangeAuthMode`, `MdastNodeMapType`, `PiExpression`, `HierarchyCircularNode`, `server.IConnection`, `apid.GetRecordedOption`, `IProject`, `BasicDataPropertyForAdvice`, `i.PackageInfo`, `CW20Addr`, `UsageInfo`, `SessionKeySupplier`, `LayerNormalization`, `ComponentDefinition`, `GetDetailRowDataParams`, `DemoteGroupUsersRequest`, `OrOptions`, `AdaptMountedPrimitiveElement`, `HttpsCallable`, `UseMutationReturn`, `ShowProgressService`, `SlotTreeItemBase`, `EveError`, `LineElement`, `CraftDOMEvent`, `IconElement`, `DOMHighResTimeStamp`, `AutoScalingConfiguration`, `RequirementFn`, `GetLifecyclePolicyCommandInput`, `SFC`, `LinearScale`, `ElementRefs`, `ComputerPlayer`, `GenericDispatch`, `ButtonStyle`, `TagAttributes`, `Syntax`, `puppeteer.ElementHandle`, `ListWorkRequestErrorsResponse`, `TestActions`, `Substream`, `UsableDeclaration`, `WowContext`, `FactReference`, `ExecutionError`, `MemoryShortUrlStorage`, `UseHydrateCache`, `YamlMapping`, `LabelUI`, `IDateRangeInputState`, `DescribeJobLogItemsCommandInput`, `TypeScriptType`, `TransactionHash`, `KeyAgreement`, `Marble`, `ServerErrorResponse`, `CephPoint`, `fhir.Patient`, `ImagePreviewProps`, `IReference`, `Mine`, `JOB_STATE`, `WalletTreeItem`, `FrontstageDef`, `CopySource`, `ControllerFactory`, `PoolMode`, `Gravity`, `ContextConfig`, `OpenSearchSearchHit`, `InsightInfo`, `UnescapedString`, `MessageFileType`, `PartyAccept`, `UpdateLaunchConfigurationCommandInput`, `ManifestActivity`, `StorageKey`, `MonsterProps`, `MigrationOptions`, `ParserFactory`, `DiscoverFieldProps`, `RegisterReq`, 
`HttpSetup`, `Knex.QueryBuilder`, `FrequencySet`, `ClassOrFunctionOrVariableDeclaration`, `MapperForType`, `SvelteComponentDev`, `DirectionalLight`, `IJwtPayload`, `TimePickerComponentState`, `BrowserController`, `RSAKeyPair`, `MetricFilter`, `t.Errors`, `FixedTermLoanAgency`, `MySQLClient`, `ZoomState`, `Twilio`, `MessageState`, `CompilationParams`, `InteractionStore`, `Pets`, `Q`, `KeyframeAnimation`, `DeviceVintage`, `FieldValues`, `ServerClosure`, `TypeAssertion`, `GeometryPartProps`, `TableCellProps`, `TagResourceResponse`, `PropertyOperation`, `WWA`, `NodeTypesType`, `AddTagsCommand`, `FindProjectQuery`, `TypedMutation`, `SavedVisState`, `ResourceConfiguration`, `Http3ReceivingControlStream`, `FileDto`, `DatosService`, `IAssetInfo`, `IndexPatternSpec`, `AnimationBuilder`, `HistoryAction`, `IQuickeyOptions`, `ThyButtonType`, `JsonSchema.JSONSchema`, `DOMAPI`, `ActionFactory`, `ListIdentityProvidersCommandInput`, `DescData`, `D`, `ModuleResolutionCache`, `TriplesecDecryptSignature`, `MapboxMarker`, `IWorkflowExecuteHooks`, `ResponsiveAction`, `WorkerAccessor`, `NumericF`, `HSD_TExpList`, `ArgSchemaOrRolesOrSummaryOrOpt`, `HostFileInformation`, `OpenSearchDashboardsReactContextValue`, `OutputParametricSelector`, `DbTx`, `TActor`, `Creep`, `BitstreamFormatDataService`, `WorkerMainController`, `MsgUpdateProvider`, `ITableColumn`, `IResolveResult`, `Particle`, `IFluidSerializer`, `HighlightSpan`, `VerifierConfig`, `CommerceTypes.ProductQuery`, `FeatureSource`, `CommandLinePart`, `ColorSwitchCCSet`, `OutputTargetDistGlobalStyles`, `BufferAttribute`, `ExtendedHttpTestServer`, `DescribeAlgorithmCommandInput`, `Dialogue.Argv`, `AccountEntity`, `InspectorEvents`, `SuiAccordionPanel`, `Margin`, `Span_Link`, `TransitionState`, `ILocationProvider`, `BaseInput`, `BaseRender`, `Patterns`, `IOrganizationContact`, `IconItem`, `SArray`, `TwitchServiceConfig`, `ISmsOptions`, `IErrorPositionCapable`, `BirthdayService`, `Matrix3`, `WorkspaceEntry`, `ReadonlyUint8Array`, `RequestHandler0`, `StyProg`, `AggArgs`, `CollidableLine`, `FilePathStore`, `d.OutputTargetDistLazyLoader`, `EnhancementRegistryDefinition`, `MountAppended`, `TSelectActionOperation`, `Impl`, `DeploymentNetwork`, `NextApiHandler`, `SelectBaseProps`, `Runtime.MessageSender`, `OrderFormItem`, `Foxx.Request`, `ValueOrFunction`, `AdministratorName`, `PeriodKey`, `DbObject`, `ExtendedSettingsDescriptionValueJson`, `SourceEntity`, `TestAwsKmsMrkAwareSymmetricDiscoveryKeyring`, `GLfloat`, `PaperInputElement`, `ExecutionEnvironment`, `SignatureKind`, `SchedulerApplication`, `GetApplicationCommandInput`, `EntityRecord`, `Frame`, `TodoItemNode`, `DraggingPosition`, `LibraryType`, `SlatePluginDefinition`, `PreferencesStateModel`, `IExecutionResponse`, `P8`, `DialogResult`, `RepositoryFacade`, `ZoomStore`, `UrlGeneratorsDefinition`, `MockNode`, `DeferredValue`, `RoomVisual`, `CameraService`, `DataSourceState`, `StatsModuleReason`, `WebhookOptions`, `GroupDataService`, `RendererInfo`, `MeasuredBootEntry`, `GroupsPreviewType`, `InputConfiguration`, `SharedPropertyTree`, `PutLifecyclePolicyCommandInput`, `StudioServer`, `tape.Test`, `TopicOrNew`, `MeterCCReport`, `ImplicitParjser`, `Override`, `RawConfigurationProvider`, `MeshAnimationTrack`, `CreateSubscriberCommand`, `KeyedAccountInfo`, `ILanguageRegistration`, `fabric.IEvent`, `RecordC`, `NoteDoc`, `VisualizationData`, `DataKind`, `RawNode`, `ThrottledDelayer`, `UpdateServiceCommandInput`, `ContentOptions`, `IMasks`, `RouteFilter`, `OrmConnectionHook`, `RadixTokenDefinition`, 
`ChildProcessWithoutNullStreams`, `Extrinsic`, `requests.ListAnnouncementsPreferencesRequest`, `MakeSchemaFrom`, `ProvideCompletionItemsSignature`, `ImageStyleProps`, `ConfigHandlerAndPropertyModel`, `CheckpointsOrCheckpointsId`, `BroadcastEventListener`, `DirectiveOptions`, `SequenceInterval`, `PSIVoid`, `OpenAPIV3.Document`, `DomElement`, `LeaveRequest`, `UntagResourceCommandOutput`, `RefetchOptions`, `JSDocTypeReference`, `IOrganizationContactCreateInput`, `PrismaClientFetcher`, `GraphQLResolveInfo`, `ResourcePackWrapper`, `ComponentWithAs`, `OptionParams`, `ModelMapping`, `requests.SearchSoftwarePackagesRequest`, `TypeAcquisition`, `SubjectSetConstraint`, `LockFileConfigV1`, `PredicateOperationsContext`, `Notifier`, `Web3`, `ScheduleConfiguration`, `RecordProvide`, `DeleteAppRequest`, `Axes`, `IDeviceWithSupply`, `BrowserInterface`, `ItemMetadata`, `RoosterCommandBarButtonInternal`, `api.Span`, `StatusUnfollow`, `IssueTree`, `KeyValueChangeRecord`, `ChangePart`, `Benchee.Benchmark`, `IndexOptions`, `IObservableValue`, `Meal`, `DefinitionResult`, `ast.ExternNode`, `League`, `SubjectKeyframes`, `Swagger2`, `IBoxPlotColumn`, `HappeningBreakpoint`, `Builtins`, `Composer`, `LazyDisposable`, `Stream.Readable`, `RouteComponentProps`, `CallMemberLikeExpression`, `Web3ProviderType`, `debug.IDebugger`, `TabInfo`, `FieldFormatEditorFactory`, `ServerMode`, `PacketParams`, `tfc.Tensor`, `mozEvent`, `IncomingMessage`, `ObservableSetStore`, `Input`, `TypeAlias`, `RouterAction`, `ParsedEnumValuesMap`, `EffectFallbacks`, `ReferencedSymbol`, `CircuitGroupState`, `SetConstructor`, `u32`, `UpdateDatasetCommandInput`, `Severity`, `Cwd`, `PathFilterIdentifier`, `Human`, `AdvertiseByoipCidrCommandInput`, `URLTransitionIntent`, `IElementRegistry`, `OrphanRequestOptions`, `BattleFiledData`, `PerformOperationResult`, `SpriteFontOptions`, `RootContext`, `CommunicationIdentifierKind`, `Protocol.ServiceWorker.ServiceWorkerVersion`, `RouterEvent`, `ObsidianLiveSyncSettings`, `ButtonWidth`, `RegisteredConnector`, `AnimKeyframe`, `FFT`, `BuildNode`, `RequestInformationContainer`, `SRTFlags`, `SettingModel`, `ESTree.Class`, `EstimateGasEth`, `apid.UnixtimeMS`, `ShouldSplitChainFn`, `DaffCart`, `TwingNodeType`, `SelectTool`, `AgentService`, `CategoryDescription`, `d.OutputTargetDist`, `ImageAlignment`, `SearchParameters`, `IVirtualDeviceResult`, `TestAudioBuffer`, `UnwrapNestedRefs`, `BVHNode`, `PropertyDetails`, `NamedImportBindings`, `RepoBuilder`, `TreeConfig`, `DataChunk`, `ToastrService`, `NotFoundErrorInfo`, `Stats`, `RenderInfo`, `UpdateChannelCommandInput`, `Network`, `EntityDispatcherDefaultOptions`, `MockCanvas`, `PurchaseProcessor`, `IMatrixConsumer`, `Chorus`, `GetRevisionCommandInput`, `TextProperty`, `CommandData`, `CallLikeExpression`, `IAccessToken`, `EntityDefinitionService`, `ConstantQueryStringCommandInput`, `AbstractMessageParser`, `WebpackWorker`, `ExecSyncOptions`, `IOSNotificationCategory`, `ImageLocation`, `TagScene`, `AcMapComponent`, `SFUManager`, `FactoryUser`, `ts.Symbol`, `SocketClass`, `IElementStyle`, `HttpFetchOptionsWithPath`, `PackagerAsset`, `VariableContext`, `Parameters`, `EventHit`, `GeomNode`, `BuildResults`, `sharp.Sharp`, `RangePointCoordinates`, `JSDocReturnTag`, `Config.InitialOptions`, `TObj1`, `SavedObjectsCreatePointInTimeFinderOptions`, `datetime.DateTimeData`, `FleetStatusByCategory`, `Transformed`, `ListManagementAgentPluginsRequest`, `SceneNode`, `UnionOrIntersectionType`, `FormModel`, `Stripe.PaymentIntent`, `PatchType`, `RecoveredSig`, `MODNetConfig`, 
`IORedis.RedisOptions`, `Models.CurrencyPair`, `CERc20`, `Hash`, `TInjectItem`, `ModelStoreManager`, `JoinTournamentRequest`, `HistoryRPC`, `ILabel`, `UnlitMaterial`, `Warning`, `ItemGroup`, `LogsEvent`, `SortedReadonlyArray`, `ConstantTypes`, `InvalidArgumentException`, `IStoreService`, `StacksConfigRepository`, `keyboardState`, `WebSocket.ErrorEvent`, `CreateApplicationResponse`, `TargetDisplaySize`, `TypeLiteralNode`, `ScreenshotDiff`, `LogMethod`, `ResolverClass`, `TaskWrapper`, `MapMaterialAdapter`, `NextConfig`, `ViewsWithCommits`, `AbstractCrdt`, `LazyIterator`, `CancellationTokenSource`, `Electron.IpcMainInvokeEvent`, `CodeType`, `QueryBeginContext`, `CombatantViewModel`, `IDebugResult`, `SupCore.PluginsInfo`, `ComponentDef`, `MediaQueryListEvent`, `SQLiteDb`, `OrganizationSlug`, `Assertion`, `DescribeEnvironmentManagedActionHistoryCommandInput`, `MemBank16k`, `OfflineSigner`, `GeneralObject`, `SemanticsFlag`, `SnotifyToastConfig`, `ButtonGroupProps`, `TensorArrayMap`, `ListRecipesCommandInput`, `PrerenderConfig`, `ListAutoScalingConfigurationsCommandInput`, `vscode.TextEditorDecorationType`, `StreamEmbedConfig`, `MarkSpec`, `OAuthConfig`, `DevicesButtonProps`, `UiActions`, `UriService`, `ExperienceBucket`, `InterceptorContext`, `requests.ListCertificatesRequest`, `ExternalSourceAspectProps`, `TouchControlMessage`, `HistoryEnv`, `ServiceFlags`, `IStringStatistics`, `IOEither`, `ToastMessage`, `ComponentCompilerWatch`, `MemoryX86`, `W7`, `SourceState`, `UITableView`, `Mail`, `InferenceFlags`, `AttributeValueSetItem`, `IManifest`, `EmployeeAppointment`, `Items`, `CostMatrix`, `Material`, `UploaderBuilder`, `TransportRequest`, `Nibble`, `SubInterface`, `CombinedReportParameters`, `Callback`, `HookName`, `GeoUnits`, `MultiIndices`, `CompletionMsg`, `estypes.SearchResponse`, `responseInterface`, `UserIdentifier`, `GoAction`, `CustomerRepository`, `LSPConnection`, `CellValue`, `MapOfClasses`, `CrowdinFileInfo`, `FormatCompFlags`, `AlarmSensorType`, `DeveloperExamplesSetup`, `CFCore`, `UpdateNote`, `THREE.Scene`, `ItemSpec`, `TdDataTableService`, `UpdateServerCommandInput`, `TextFont`, `ConfigArgs`, `DiscoverSetupPlugins`, `HttpService`, `JSDocAugmentsTag`, `FoamFeature`, `ImportClause`, `EntityCollectionServiceElementsFactory`, `PlayerLink`, `APIWrapper`, `IGiftsGetByContactState`, `CsvReadOptions`, `AxisScale`, `Fetcher.IEncrypted`, `PuppetASTClass`, `AutorunFunction`, `RollupBuild`, `StackProc`, `AwsVpcConfiguration`, `EngineConfigContent`, `StreamManager`, `GauzyAIService`, `PendingModifiedValues`, `UpSampling2DLayerArgs`, `GLTFNode`, `IRenderFunction`, `GLsizei2`, `InDiv`, `DataBySchema`, `TransportWideCC`, `FormProperty`, `FindQuery`, `ExpensiveResource`, `requests.ListModelsRequest`, `interfaces.BindingOnSyntax`, `ts.LeftHandSideExpression`, `TreeStructure`, `TimeSeries`, `SchemaConfigParams`, `core.IHandle`, `SphereGeometry`, `DAL.DEVICE_ID_THERMOMETER`, `ThermostatFanMode`, `GfxFormat`, `CompletionExpressionCandidate`, `NetworkState`, `PiLangExp`, `ISocket`, `RElement`, `AstNodeParser`, `ReactNativeContainer`, `IssueStatus`, `ConeLeftSide`, `SmoldotProvider`, `ResolvedEntityAtomType`, `ISlope`, `DateIntervalFormatOptions`, `PromiseQueue`, `VolumeTableRecord`, `Tmpfs`, `NotificationId`, `SwitchOrganizationCommand`, `DescribeExportTasksCommandInput`, `XDate`, `PlatformRender`, `ItemDefBase`, `SparseArray`, `UIResource`, `CollectionPage`, `ClassType`, `Variant`, `OpsMetrics`, `WorldmapPointInfo`, `IMonitoringFilter`, `UnlinkFromLibraryAction`, `WriteContext`, `IMigrationConfig`, 
`StyleRules`, `IndexKind`, `U8Archive`, `InitialStatistics`, `ScatterPointItem`, `CommandQueueContext`, `Typeless`, `Sorting`, `WasmSceneNode`, `LexoRank`, `pouchdb.api.methods.NewDoc`, `RateLimiter`, `StringASTNode`, `ManagedDatabaseSummary`, `MatDialogRef`, `FormControlProps`, `FormikErrors`, `MsgCloseBid`, `SkipListMap`, `BuildTask`, `CreateTemplateCommandInput`, `RtkQueryApiState`, `JobPostLike`, `IApiComponents`, `PokerScoreService`, `Stash`, `LeveledDebugger`, `ObjectGridComponent`, `SqrlKey`, `ElementSet`, `Repository`, `PartialEmoji`, `T.LayerStyle`, `Bluetooth`, `ThreadState`, `ParamIdContext`, `ValidationService`, `PointStyle`, `RateProps`, `fs.WriteStream`, `MemoryHistory`, `fGlobals`, `MetaDataModel`, `CandidateInterviewersService`, `Uint32Array`, `RenderArgs`, `ContractService`, `GetCommandInvocationCommandInput`, `StackMap`, `UrlParam`, `PossibleValues`, `StorageReference`, `AbbreviationTracker`, `MediaInfo`, `models.ChatNode`, `MutationResolvers`, `Survey.Base`, `Choice`, `ContentGroup`, `InstanceInfo`, `LinkParticle`, `PartialItem`, `ThyNotifyOptions`, `XsuaaServiceCredentials`, `BTI_Texture`, `CollectionDataStub`, `ScoreInstrument`, `AddTagsToResourceCommandInput`, `ThemePair`, `EnvironmentService`, `PropertyDrivenAnimation`, `TaskModel`, `Criteria`, `ColumnsSortState`, `QueryCache`, `ErrorMiddleware`, `BufferedTransport`, `AllSettings`, `ProfileIdentifier`, `CompilerEventDirDelete`, `AudioSelector`, `HubPoller`, `WindowsJavaContainerSettings`, `PeerRequestOptions`, `WrappedFunction`, `KaizenToken`, `IMiddleware`, `ICurve`, `BuilderRuntimeEdge`, `ISmartMeterReadingsAdapter`, `ValidationRuleMetaData`, `AvailableSpaceInConsole`, `PickTransformContext`, `QueryGroupRequest`, `ImageStyle`, `Router`, `PlanningRestriction`, `DataAdapter`, `SavedObjectsRawDocParseOptions`, `BulkUnregistration`, `ReactiveArray`, `IStandaloneCodeEditor`, `BoxShadow`, `Clique`, `LineType`, `NodeId`, `XPathResult`, `MenuServiceStub`, `MeshRenderer`, `ProposalTx`, `RemoteFilter`, `JsonParserTransformerContext`, `ProtocolName`, `ExtendedChannelAnnouncementMessage`, `PullAudioOutputStreamImpl`, `BuildOptions`, `MySQLParserListener`, `FirmaSDK`, `InvalidGreeting`, `Tensor1D`, `WeConsoleScope`, `PreviewService`, `ValueFillDefinition`, `IClassification`, `Git.VersionControlRecursionType`, `MdcDialogPortal`, `PacketEntity`, `AccountingEvent`, `DeleteApiKeyCommandInput`, `APISet`, `TimerActionTypes`, `Prism`, `TransformFunction`, `TableDifference`, `IGatsbyImageData`, `CompletionsProviderImpl`, `PointLight`, `ConstantExpressionValue`, `GetReplicationConfigurationCommandInput`, `MessageConnection`, `IPluginBinding`, `CurrentVersion`, `DocHeader`, `MarketData`, `ThanksHistory`, `LabelMap`, `GX.IndTexMtxID`, `BeancountFileContent`, `T.Layer`, `SourceEngineView`, `TwingCallable`, `Semester`, `AnyMap`, `MeetingSessionVideoAvailability`, `Foxx.Response`, `CSSSnippetProperty`, `EngineArgs.MarkMigrationAppliedInput`, `IGameCharaUnit`, `PipelinePlugin`, `com.mapbox.pb.Tile.IFeature`, `TypedGraph`, `Level`, `SerializedCrdtWithId`, `EnvelopesQuery`, `RefreshInterval`, `TransactionProto.Req`, `SupportedExt`, `sbvrUtils.PinejsClient`, `ProcessGraphic`, `SrtcpSSRCState`, `ModuleG`, `TextTheme`, `CanvasTypeVariants`, `MutationName`, `ConfigFileExistenceInfo`, `ContextValue`, `OptionComponent`, `ModifyGlobalClusterCommandInput`, `Movement`, `TemplateContext`, `PackageMeta`, `ESTree.Identifier`, `requests.ListDatabaseSoftwareImagesRequest`, `KeyboardLayout`, `NodeMaterial`, `SequenceDeltaEvent`, `SqrlErrorOutputOptions`, 
`ActiveToast`, `OhbugEventWithMethods`, `CoreConfig`, `SignedContractCallOptions`, `WordCharacterKind`, `HdEthereumPaymentsConfig`, `BracketPair`, `NonFungiblePostCondition`, `capnp.Data`, `Multiset`, `CandidateCriterionsRatingService`, `AuthenticatorFacade`, `IMemFileSystem`, `LayerListItem`, `xml.ParserEvent`, `EncodingQuery`, `ChannelContext`, `OriginOptions`, `GitHubCommit`, `ShellCommand`, `PutResourcePolicyCommandOutput`, `IDatabaseResultSet`, `requests.ListPingProbeResultsRequest`, `INamesMap`, `Builtin`, `UnlockedWallet`, `InvalidationLevel`, `Controller2`, `DiagramModel`, `RepositoryOptions`, `End`, `TextElementLists`, `ProjectControlFunction`, `DescribeAppInstanceCommandInput`, `backend_util.ReduceInfo`, `Sink`, `FSAOptions`, `StepName`, `SubmissionJsonPatchOperationsService`, `FnCall`, `FileStatWithMetadata`, `PuppetASTResolvedProperty`, `AzExtClientContext`, `ConfigurationDTORegions`, `IJetView`, `Serverless.Options`, `IMappingFunction`, `CertificateVerify`, `CreatePostDto`, `CreatePagesArgs`, `DatabaseContract`, `CheckReferenceOriginsParams`, `UIState`, `SyncRule`, `HeaderColumnChain`, `LocalMarker`, `ServerTranslateLoader`, `TextureCube`, `Sheet`, `ListOptions`, `IRequest`, `ProgressStep`, `CombinedScanResult`, `BillingModifier`, `GX.RasColorChannelID`, `GithubService`, `StateTree`, `MonthPickerProps`, `CSSDataManager`, `Cancel`, `TestIamPermissionsRequest`, `PlaneAltitudeEvaluator`, `SearchError`, `WebSocketEvent`, `AudioVideoEventAttributes`, `StridedSliceDenseSpec`, `Tied`, `TEConst`, `ChangeInstallStatus`, `OpenYoloCredential`, `ValuesDictionary`, `AttributeData`, `RemoteStream`, `ExtendedSocket`, `SeriesRef`, `AggregationRestrictions`, `ContractEventDescriptor`, `Mutable`, `NewsroomState`, `HandlerDefinition`, `IGarbageCollectionDetailsBase`, `GetObjectCommandInput`, `UnocssPluginContext`, `ItemSpace`, `EventSubscriber`, `UseRefetchReducerState`, `ConversationTarget`, `ts.EnumMember`, `ParsedAuthenticationInstructions`, `TokenRecord`, `InheritanceNode`, `NormalizedRuleType`, `EngineAPI.IApp`, `GenericTable`, `KoaContextWithOIDC`, `ReactText`, `ts.NodeArray`, `MovementComponent`, `MetamaskNetwork`, `ReadonlyQuat`, `V1StatefulSet`, `InvalidFormatError`, `IGeneratorData`, `ViewModelReducerState`, `CollectionInstance`, `UsageExceededErrorInfo`, `TransportType`, `HelloResponse`, `ExtractorResult`, `Shipment`, `NavigationTrie`, `IPeripheral`, `NoteItemComponent`, `Quantity.OPTIONAL`, `UpdateInputCommandInput`, `ENUM.SkillRange`, `AssetModel`, `MerchantStaffEntity`, `Factor`, `WlMedia`, `ExecutionMessage`, `LambdaType`, `UpdatePartial`, `TransitionStatus`, `WorkNodes`, `VerificationMethod`, `CellRenderer.CellConfig`, `DescribeDBSubnetGroupsCommandInput`, `Form`, `ModuleElementDeclarationEmitInfo`, `MutationArgsType`, `BuiltinFunctionMetadata`, `LIST_ACTION`, `Shadow`, `SearchStrategySearchParams`, `PredicateContext`, `TopNavMenuData`, `IContentItem`, `CreateSavedObjectsParams`, `BuildSettings`, `ScheduleState`, `Positive`, `BufferChannel`, `People`, `RouteDataFunc`, `ParameterApiDescriptionModel`, `ExpressionRenderHandler`, `Unchangeable`, `DeclarationInfo`, `BoostDirectorV2`, `PackageUser`, `ConditionsType`, `fhir.DocumentReference`, `UniqueEntityID`, `RecordBaseConcrete`, `Variable`, `NohmModelExtendable`, `Union3`, `JSXTemplate`, `IFileMeta`, `IStateBase`, `ParameterWithDescription`, `PropertyChangeResult`, `common.AuthParams`, `LinksFunction`, `IEmployeeProposalTemplate`, `GlobalContext`, `SecurityPermission`, `IWalletContractService`, `EntityApi`, `RequestEntryState`, 
`BSONType`, `OneOfAssertion`, `ListRoomsResponse`, `StackSpacing`, `QueryOrderOptions`, `BTCMarkets.currencies`, `OrganizationDocument`, `sdk.CancellationDetails`, `S3Resource`, `ClientJournalEntryIded`, `QuestionProps`, `ModelInfo`, `TestRunArguments`, `SymbolDisplayPart`, `ISemver`, `GanttBarObject`, `TopicInterest`, `HdPublicKey`, `VerifiedParticipant`, `React.ReactText`, `PatchSource`, `IKChain`, `ModuleSystemKind`, `ApplyPredicate`, `GfxSamplerFormatKind`, `ConfigurationLoader`, `PoolSystem`, `AggObject`, `IGenericEntity`, `PlacementResult`, `PlanetGravity`, `BitcoinPaymentsUtils`, `FinalDomElement`, `ODataEntitySetResource`, `RebootInstanceCommandInput`, `BuildState`, `CategoriesService`, `PiStyle`, `CliOutputOptions`, `DeleteJobCommandInput`, `Rarity`, `SolverConfig`, `PurchaseInfo`, `PageRect`, `LoggingInfo`, `ReactFrameworkOutput`, `MethodOptions`, `StateValue`, `RecordMap`, `EventUi`, `BIP44HDPath`, `OverlayConnectionPosition`, `TestCommander`, `ChartOffset`, `AppComponentDefinition`, `GeneralImpl`, `MessagePacket`, `UpdateWindowResizeSettings`, `UserRegistrationData`, `Showable`, `altair.LightClientUpdate`, `PrivateIdentifier`, `PDFKitReferenceMock`, `DigitalInOutPin`, `Git.IAuth`, `ListDatasetGroupsCommandInput`, `ReferenceSummary`, `IOOption`, `OperationLink`, `CloudAssembly`, `ListElementSize`, `FeatureState`, `GX.CC`, `DialogType`, `NamespacedAttr`, `SchemaArg`, `QueryCommand`, `PackageInfos`, `GoToTextInputProps`, `DiffuseMaterial`, `ElementDecorator`, `Units`, `QnaPost`, `LocalRenderInfo`, `LoadLastEvent`, `RetrievedCredentials`, `ServerSecurityAlertPolicy`, `AnimationBoneKeyFrameJson`, `TTypeProto`, `TransferParameters`, `SocketIoConfig`, `theia.WebviewPanelShowOptions`, `MatchedStep`, `CompileUtil`, `FacetSector`, `DefaultToastOptions`, `CacheManagerOptions`, `RippleSignatory`, `RewriteResponseCase`, `UpdateIdentityProviderCommandInput`, `LineRange`, `TextRenderStyle`, `d.FsWriteOptions`, `Operator`, `Dialogic.Item`, `SampleInfo`, `EntityFetcherFactory`, `IAnyObject`, `ComponentEventType`, `IndexInfo`, `VerticalAlignValue`, `SendMessageRequest`, `AS`, `IPluginConfig`, `TransactionsModel`, `ITagObject`, `UniversalRenderingContext`, `IScopedClusterClient`, `AuthorizationServiceSetup`, `InvokeCreator`, `DescribeScheduleCommandInput`, `ProjectTechnologyChoice`, `React.Reducer`, `SerializableObject`, `SerializationService`, `DeleteDBClusterParameterGroupCommandInput`, `UserSettings`, `XMLHTTPRequestMock`, `PreferenceProvider`, `QueryServiceClient`, `FrontMatterResult`, `TimeInput`, `SymbolResolutionStackEntry`, `UseQueryOptions`, `DestinationHttpRequestConfig`, `UnidirectionalLinkedTransferAppAction`, `ProductAction`, `DatasetStatistics`, `SpecQueryModel`, `AutoScalingMetricChange`, `DisclosureInitialState`, `SnapshotRelation`, `ServiceContext`, `SyntaxKind.Identifier`, `ICXSetup`, `AsyncBlockingQueue`, `HttpServiceSetup`, `ContextMenuItemModel`, `DirectionConfiguration`, `CapabilitiesProvider`, `NmberArray9`, `StandardAccounts`, `FocusOutsideEvent`, `ITransferItem`, `TranslationStorage`, `NamespaceObject`, `EventSummary`, `Ogg.IPageHeader`, `IUploadItem`, `IFinaleCompilerOptions`, `AzureCommunicationTokenCredential`, `IdentifierValue`, `SystemVerilogSymbolJSON`, `td.WebRequest`, `MouseWheelEvent`, `SettingActionTypes`, `WebGLRenderCompatibilityInfo`, `RawAbiDefinition`, `ChatStoreState`, `UnitState`, `TextProps`, `PymStorage`, `DataCharacter`, `CSharpField`, `AdbClient`, `OptimizelyXClient`, `ParameterConstraints`, `DashboardReport`, `CombatLogParser`, `RFNT`, 
`MockDocumentTypeNode`, `ProviderApi`, `AndroidActivityEventData`, `ApiResponseOptions`, `DaffNewsletterState`, `ObserverResponse`, `MockStoreCreator`, `WebCryptoEncryptionMaterial`, `ListTypesCommandInput`, `AppOption`, `Decipher`, `StripeModuleConfig`, `LegacySpriteSheet`, `KibanaFeatureConfig`, `RouterActions`, `VerificationClient`, `LoanCard`, `HierarchyChildren`, `RuntimeWorker`, `d.PrerenderConfig`, `TOptions`, `NetworkStatus`, `N1`, `MapPartsRailMoverNrv`, `RandomFunc`, `PackageJsonChange`, `IChangeInfo`, `ERC1155ReceiverMock`, `Arrow`, `PayloadInput`, `ConnectionContext`, `Equipment`, `Epic`, `GunScope`, `AndroidManifest`, `GlobalStateService`, `ExtractorEventEmitter`, `cdk.Stack`, `IApiConnection`, `moment.MomentStatic`, `NZBUnityOptions`, `CreateMemberCommandInput`, `BufferReader`, `ProfileData`, `AstSymbol`, `AddressAnalyzer`, `AnyItemDef`, `ProgressBar`, `METHOD`, `TestEnvironmentConfig`, `AnimVectorType`, `DocumentEntryIded`, `ZodTypeAny`, `VictoryPointsBreakdown`, `TransactionAndReceipt`, `ModuleLoaderActions`, `Web3Client`, `EntityID`, `TAbstractFile`, `ErrorChunk`, `CdkFooterRowDef`, `AttendanceMonth`, `ReactNodeArray`, `GameService`, `MessageDeserializationOptions`, `InteractiveState`, `CategoryType`, `FormatWrap`, `MoviesService`, `NbMenuService`, `PluginDebugAdapterContribution`, `ModifierToken`, `d.ServiceWorkerConfig`, `IMessageDefinition`, `Sequential`, `requests.GetZoneRecordsRequest`, `PartialCanvasTheme`, `SelectionShape`, `EventCreator1`, `GitLogCommit`, `Unit`, `ItemTypes`, `CORSOptions`, `TSESTree.Decorator`, `IndexMapping`, `IMainConfig`, `DatabaseQuery`, `VolumeBackupSchedule`, `HostService`, `AcceptFn`, `ConnTypeIds`, `TAggParam`, `AjaxConfig`, `ILocalDeltaConnectionServer`, `Prefs`, `EnvironmentTreeItem`, `FleetAuthzRouter`, `ListKeysRequest`, `Codefresh`, `StandardClock`, `IndicatorCCGet`, `GfxBufferP_GL`, `PagerXmlService`, `IndependentDraggable`, `TwitterUser`, `ManualServer`, `StatGroup`, `AngularFireStorage`, `GUIDriverMaker`, `BuilderDataManagerType`, `GanttDatePoint`, `PanelModel`, `requests.ListFunctionsRequest`, `EtherscanClient`, `IWorkflow`, `hm.BasicCredentialHandler`, `MsgStartGroup`, `SMTCallGeneral`, `KernelBackend`, `ChartSonify.SonifyableChart`, `IDocumentElementKey`, `NftType`, `NVM500Details`, `EntityTypeT`, `Val`, `X86Context`, `MutableVector3d`, `ClientStringService`, `DaffConfigurableProduct`, `PoiManager`, `LayerForTest`, `TrueFiCreditOracle`, `CreateTableOptions`, `Anomaly`, `BridgeInfo`, `ScopeGraph`, `ShaderSemanticsInfo`, `DebuggerMessage`, `SpotifyErrorResponse`, `cytoscape.EventHandler`, `SubFeaturePrivilege`, `IEventHubWizardContext`, `BaseQuery`, `ServiceScope`, `FormControl`, `BlockFactorySync`, `AuthenticationDataState`, `IColonyFactory`, `DashboardContainerFactoryDefinition`, `TwitchBadgesList`, `CompositeMenuNode`, `Code`, `RowHashArray`, `OptimizationContext`, `MultisigConfig`, `CoreModule`, `InternalOpts`, `IStorageUtility`, `d.JestEnvironmentGlobal`, `ReadonlyVec3`, `ItemUpdateResult`, `TFLiteNS`, `ExternalDMMF.Document`, `APIUser`, `PlotRowIndex`, `N5`, `FirebaseOptions`, `ClaimToken`, `KnobsConfigInterface`, `LedMatrixInstance`, `TestProject`, `KeyIdentity`, `LanguageCCSet`, `EventAxis`, `IApiProfile`, `TFLiteWebModelRunnerOptions`, `IConnectable`, `TreeView.DropLocation`, `OptionsAfterSetup`, `webhookDoc`, `PermissionDeniedState`, `Moc`, `OrderbookL2Response`, `requests.GetJobRequest`, `UserGeoLocations`, `KeyContext`, `vile.PluginList`, `ts.CommentRange`, `EventEmitter`, `LoopReducer`, `x.ec2.SecurityGroup`, 
`ng.ICompileProvider`, `InternalDiagnostic`, `ITestEntity`, `UpdateStackCommandInput`, `QueryParam`, `QPoint`, `FcNode`, `StoredPath`, `BlockchainClient`, `DisposeResult`, `PrintResultType`, `MathContext`, `IAttachmedFile`, `mat4`, `MediaSubtitlesRelation`, `TSlice`, `MDCTabBarView`, `RuleFix`, `Transactions`, `DrgRouteDistributionMatchCriteria`, `ArrayServiceArrToTreeOptions`, `LibraryEngine`, `firebase.firestore.FirestoreDataConverter`, `PyVar`, `t_44e31bac`, `EdgeCalculatorSettings`, `IncomeService`, `RouteDefinitions`, `Urls`, `BalanceChecker`, `PropsFieldMeta`, `ModelVersion`, `OhbugExtension`, `JStep`, `ISlideRelMedia`, `RecordSetWithDate`, `TCmd`, `BaseTask`, `QualifiedUserClients`, `GluegunFileSystemInspectTreeResult`, `UpdateActivatedEvent`, `CipherService`, `TsPaginatorMenuItem`, `d.HostConfig`, `Chatlog`, `ConstructorParams`, `LaunchContext`, `ManualOracle`, `AggregatedStat`, `PlayerEntity`, `DynamicActionsState`, `AnchorProps`, `FactoryRole`, `TagsViewState`, `mediaInfo`, `PrismaClientOptions`, `DescribeApplicationCommandInput`, `MsgSharedGroup`, `VRMSpringBoneGroup`, `_Transaction`, `VFS`, `pulumi.Resource`, `ComponentSetup`, `LoopBackFilter`, `AppxEngineStepGroup`, `UINavigationController`, `TagRenderingConfig`, `MatSnackBarRef`, `DocumentReference`, `QueryDeepPartialEntity`, `Subsegment`, `SWRKeyInterface`, `VideoTileController`, `ReadGeneratedFile`, `SwankConn`, `SplinePoint`, `HTMLMetaElement`, `BlobModel`, `ITestObjectProvider`, `GitManager`, `MultiSet`, `StatusIndicatorGenericStyle`, `JSXAttribute`, `LatLngLiteral`, `MenuItemConstructorOptions`, `EChartsCoreOption`, `DatabaseSchema`, `CompositeLocale`, `TextContent`, `ICustomerRepository`, `BillAmount`, `xlsx.Sheet`, `CursorPagingType`, `GatherShape`, `IEmbeddable`, `RumEvent`, `NexusPlugin`, `DraggableElement`, `CommentProps`, `GlobalEventDealer`, `GetBotCommandInput`, `PackagePolicyInputStream`, `DataWriter`, `IKernelConnection`, `PropertyPreview`, `DataModel.ChangedArgs`, `JMapInfoIter`, `HdBitcoinPaymentsConfig`, `Survey.Page`, `ToolingLog`, `EndCondition`, `AddMissingOptionalToParamAction`, `Envelope`, `ValueMetadataString`, `BuildFeatures`, `RealtimeAttendeePositionInFrame`, `MdDialog`, `SamplerDescriptor`, `CheckoutAddressesPage`, `DecorationSet`, `BellSchedule`, `NativeScrollEvent`, `ts.ObjectLiteralExpression`, `CancelableRequest`, `NodeFlags`, `CustomTypes`, `SubmissionSectionObject`, `requests.ListTaggingWorkRequestErrorsRequest`, `IPerformTasksCommandArgs`, `ARNodeInteraction`, `Routing`, `ValidationException`, `CeloTransactionObject`, `UpdateMigrationDetails`, `PrefV2`, `Install`, `JRPCResponse`, `StorageEvent`, `ColumnWidths`, `CandidateTechnologiesService`, `IExplanationMap`, `ThyTranslate`, `DocEntry`, `DashboardStart`, `RawSavedDashboardPanel620`, `MToonMaterial`, `JsonRpcResponseCallback`, `CompileContext`, `TypeConstructor`, `DownloadInfo`, `BaseOption`, `IHttpClient`, `ListEmailIdentitiesCommandInput`, `DescribeChangeSetCommandInput`, `ElementData`, `Log`, `FormatterOptionsArgs`, `GfxBufferP_WebGPU`, `SymbolOr`, `HostsByIpMap`, `NavigationGuardNext`, `Match`, `TestingModuleBuilder`, `com.nativescript.material.core.TabItemSpec`, `MSDeploy`, `ExceptionalOpeningHoursDay`, `SearchResponse`, `DueDate`, `RollupClient`, `SettingsType`, `TreeSelectionReplacementEventArgs`, `SearchComponent`, `IndicatorCCSet`, `KuduClientContext`, `MIRInvokeKey`, `SecretProvider`, `Specification`, `ChannelTreeItem`, `TextChangeRange`, `IIncome`, `TT.Step`, `CombinationKind`, `CSSVariables`, `DescribeDetectorCommandInput`, 
`ChartRequest`, `RedocThemeOverrides`, `Settled`, `Extras`, `SourceDir`, `NormalizedEsmpackOptions`, `xyData`, `monaco.editor.ITextModel`, `MatrixEntry`, `GetEnvironmentCommandInput`, `DbMempoolTx`, `PDFAcroText`, `MessageRequest`, `TxResult`, `CreatePresetCommandInput`, `RobotState`, `DBProvider`, `RegisterDomainCommandInput`, `NamespaceScope`, `AnyApi`, `LanguageCCReport`, `PluginSpec`, `SfdxOrgInfo`, `DynamoDB.ReturnConsumedCapacity`, `Frontstage1`, `vscode.TreeItem`, `DependencyDescriptor`, `ExpressMeta`, `FactoryResult`, `IWebhookMatchedResult`, `MVideoUUID`, `ReduxAction`, `Civil`, `Hostname`, `SubsystemType`, `PiTypeStatement`, `ExportInfo`, `UniswapVersion`, `Request`, `Widget.ResizeMessage`, `BalanceTransferPayload`, `SpotTag`, `PlayerInputModel`, `TrueConstraint`, `ContentTypeProperty`, `FetchableType`, `ITerminal`, `TestingRuntime`, `ResourceXML`, `PredicateProvider`, `FluidDataStoreContext`, `CoapForm`, `MediaManager`, `GenericDeclarationSupported`, `AzureDeployerService`, `IViewArgs`, `LibraryNotificationActionContext`, `CausalRepoCommit`, `ExpressionStatement`, `vscode.EventEmitter`, `ModifyPayloadFnMap`, `Cropping2D`, `ElementMeta`, `NormalRange`, `TSubfactionArmy`, `Telemetry`, `CheerioElement`, `CustomClientMetadata`, `ArcTransactionDataResult`, `ExoticComponent`, `TrueFiPool2`, `MountOptions`, `CompilerEventFileDelete`, `DateInput`, `ImportedCompiledCssFile`, `ViewportHandler`, `RNConfig`, `ListRuleGroupsCommandInput`, `NotificationType`, `ExportTypesRegistry`, `ErrorWithLinkInput`, `PropertyUpdatedArgs`, `ToastId`, `d.LoadConfigInit`, `GoalTimeFrame`, `QueueFunctions`, `requests.ListSourceApplicationsRequest`, `Bit`, `AuthenticationSession`, `cc.Node`, `GlyphplotComponent`, `JsNode`, `SimplifyOptions`, `ThisExpression`, `CanvasIconTypes`, `SpinnerProps`, `TaskGroup`, `Slicer`, `AppType`, `Deferred`, `ICanvasProps`, `UrlSegmentGroup`, `CommandLineParameter`, `TradeService`, `BlockchainService`, `TableRequestProcessorsFunction`, `ISdkBitrate`, `InitOptions`, `LinkedListNode`, `TileLevel`, `ZoneAwarePromise`, `IVec2Term`, `SerializeOpts`, `ChannelsSet`, `Bundler`, `CameraKeyTrackAnimationOptions`, `DidResolutionOptions`, `IResourceInfo`, `SecurityCCNonceReport`, `NgGridItemSize`, `AuxConfig`, `JobSavedObjectService`, `CompilerSystemRenameResults`, `Thenable`, `SymbolWithScope`, `Web3Utils`, `Builder`, `ChartTemplatesData`, `CloneOptions`, `ImportStatement`, `ListEnvironmentsCommandInput`, `ProfilePage`, `BaseField`, `Recipients`, `ShootingNode`, `yubo.IRecordMessage`, `AST.Node`, `GridGraphNode`, `RemoteUserRepository`, `WalletStore`, `CMB`, `Themed`, `CreateConnectionDetails`, `FlattenedFunnelStepByBreakdown`, `JobValidationMessageId`, `Pos`, `InterfaceDeclaration`, `IService`, `BaseCallback`, `GetConnectionsCommandInput`, `CombinedEntry`, `LicenseState`, `TimelineProps`, `CodeMirror.EditorFromTextArea`, `AWSOrganizationalUnit`, `UnvalidatedIndexingConfig`, `PullRequestReference`, `RoomLayout`, `MessageObject`, `RPCResponse`, `SelectBuilder`, `SectionState`, `ArrayTypeNode`, `ShortcutService`, `IndexPatternValue`, `CommandsMutation`, `AdminUserEntity`, `ExpressClass`, `workspaces.ProjectDefinition`, `IndexTemplate`, `AppConfirmService`, `FnU2`, `RunningState`, `AbstractCancellationTokenSource`, `three.Object3D`, `CoinPayments`, `ArmResourceTemplate`, `ColorModeRef`, `RewardManager$1`, `FundingStrategy`, `WalletMock`, `INetworkNavigatorNode`, `UserDescription`, `DependencyChecker`, `RockType`, `SVGMark`, `TableRowState`, `KeyRingService`, `Access`, `CountingChannel`, 
`LocalContext`, `KPuzzleDefinition`, `StorageObject`, `LocalStorageSinks`, `d.JestConfig`, `ConvectorController`, `STLoadOptions`, `EditArticleDto`, `VariableAST`, `Mapper`, `DockPanel`, `ITabData`, `SPort`, `TopAppBar`, `VitePluginConfig`, `SignatureDeclaration`, `EmailService`, `NodePbkdf2Fn`, `BitFieldResolvable`, `PurgeHistoryResult`, `next.Page`, `CurrencyDisplayNameOptions`, `MemoryInfo`, `IExpense`, `ISubgraph`, `SelectOptionBase`, `IResultGroup`, `Interaction`, `FixtureFunc`, `GfxrAttachmentSlot`, `StartInstanceCommandInput`, `ReqMock`, `word`, `EnumValueDefinitionNode`, `IRequestDTO`, `SecretWasmService`, `URIAttributes`, `UserProvider`, `GraphQLFieldConfigMap`, `CreateComponentCommandInput`, `EndOfDeclarationMarker`, `ExpressionFunctionTheme`, `TestRunner`, `QueryPlan`, `firebase.firestore.Timestamp`, `CachePolicy`, `SessionStorageSources`, `ReactElement`, `VroRestClient`, `TRPCClient`, `HydrateScriptElement`, `ViewEvent`, `IPropertiesElement`, `PdfCreator`, `Layouter`, `NodeTypes.IMessagingService`, `StringDocument`, `ITempDirectory`, `SkillLogicData`, `ng.ICompileService`, `ReduxReducer`, `ObservableQueryBalances`, `PrettierOptions`, `K.StatementKind`, `MatMulPackedProgram`, `DbService`, `ReportGenerator`, `BitmexSpy`, `OnTouchedType`, `VirtualCollection`, `Monster`, `VRMBlendShapeProxy`, `AutoCompleteContext`, `ITrace`, `DeleteApplicationRequest`, `Ellipse`, `DropOptions`, `NavAction`, `RedBlackTreeEntry`, `Alt`, `GanttService`, `CurveLocationDetail`, `WindupMember`, `EntityDto`, `TicketDoc`, `StylingContext`, `BilinearPatch`, `BaseTxInfo`, `Mountpoint`, `ArticleDetail`, `ModernRoute`, `next.Artboard`, `IEndpointOptions`, `IRepo`, `EndpointArgument`, `REQUIRED`, `PolyfaceAuxData`, `PutObjectCommandInput`, `ResolverRegistry`, `requests.ListCrossConnectsRequest`, `GetRequest`, `RpcClientFactory`, `GraphQLScalarType`, `configuration.uiType`, `CSG`, `CheckBoxProps`, `IUIProperty`, `Iterable`, `TypeNames`, `VariableLikeDeclaration`, `IFilterListItem`, `PreimageField`, `CreateAppInstanceCommandInput`, `RenameLocation`, `cc.Sprite`, `GoogleActionsV2AppRequest`, `StoreClass`, `IInputIterator`, `CompSize`, `AvailableMirror`, `Joi.ObjectSchema`, `MEvent`, `SharedMetricsPublisher`, `JsonFormsStateContext`, `CreateRuleCommandInput`, `PutDeliverabilityDashboardOptionCommandInput`, `Animated.Node`, `ProcessRepresentation`, `DataRow`, `SrtpSSRCState`, `GraphOptions`, `Circline`, `IInstruction`, `BlobEvent`, `GeoObject`, `ClientHttp2Stream`, `BNLike`, `GetAllAccountsRequestMessage`, `SessionsState`, `mat2d`, `ConfigKey`, `ProjectsActions`, `ViewContainerRef`, `ExecaReturnValue`, `DeepStateItem`, `SubmissionProgress`, `socketio.Server`, `MaterialData`, `ts.ModuleDeclaration`, `RequestsDataItem`, `ActionCreators`, `BuildListInstanceCreateOptions`, `SignedResponse`, `GameStateRecord`, `ListCV`, `messages.Pickle`, `TransactionDetails`, `LocationInformation`, `INetwork`, `RequestStatistics`, `CompilerEventBuildNoChange`, `Taro.request.Option`, `PriceSpec`, `CreateErrorReportInput`, `Models.GamePosition`, `Datafile`, `BuildVisConfigFunction`, `RecordSubType`, `apid.GetReserveOption`, `PostQueryVarsType`, `IGrid2D`, `RtcpTransportLayerFeedback`, `AnimationKeyframesSequenceMetadata`, `PcmChunkMessage`, `ConfigIntelServer`, `firebase.FirebaseError`, `Primitives.Point`, `ChangeInfo`, `LocalMicroEnvironmentManager`, `GaxiosResponse`, `ScaleContinuous`, `SafeHtml`, `IEventPlugin`, `Serenity`, `WindowManager`, `TypeDBTransaction`, `QualifiedRule`, `CreateDatasetImportJobCommandInput`, 
`ResolvingLazyPromise`, `MatrixArray`, `IHttpResult`, `DaoFilter`, `ISchema`, `UILog`, `Directionality`, `ProviderFrameElement`, `IpAddress`, `FilterEngine`, `Moized`, `State`, `NameBindingType`, `ColorSchemeName`, `WechatMaterialIdDTO`, `AttributionsWithResources`, `LoginItemProps`, `PSTTableBC`, `AllKindNode`, `ICommandPalette`, `GraphStoreDependencies`, `ImageDataLike`, `handleEvent`, `CreateChannelMembershipCommandInput`, `DefineMap`, `ABIDecoder.DecodedLog`, `GalleryActions`, `PasswordSchema`, `BoxGeometry`, `THREE.Box2`, `ts.TypeQueryNode`, `Sticker`, `IBuildTask`, `SnippetSession`, `HsQueryBaseService`, `Security2Extension`, `ClientHttp2Session`, `ByteWriter`, `A0`, `RenderCanvas`, `Waveform`, `IAggregateStructure`, `SearchResultsLayer`, `TokenResult`, `DiagnosticAddendum`, `SharedDelta`, `RouteArg`, `Whitelist`, `IParameterDefinitionsSource`, `RequestBody`, `ColDef`, `MergeQuerySet`, `StateContext`, `IConnectOptions`, `ICard`, `InvoiceEstimateHistoryService`, `TooltipPoint`, `Replace`, `LinearSearchRange2dArray`, `Unpacker`, `PixivParams`, `CSharpDeclarationBlock`, `EdmxMetadata`, `USB`, `ServiceHttpClient`, `ModalWindowProps`, `BitReader`, `LuaMultiReturn`, `IFileChanges`, `ConstraintTiming`, `TrendResult`, `requests.ListPublicIpsRequest`, `Sidebar`, `RoutingService`, `MatchCallback`, `SystemType`, `VIS0_NodeData`, `ChartConfig`, `GltfFileBuffers`, `ProfileStore`, `FetchArgs`, `PoolConnection`, `PostMessage`, `IsometricPath`, `PropertyDescription`, `InferredSize`, `IApiSecurityRequirement`, `TokenPosition`, `RenderableSprite3D`, `LoginModel`, `ScreenshotBuild`, `IAuthenticateOidcActionConfig`, `SignedOnUserService`, `ITestReporter`, `MagicMessageEvent`, `IndentToken`, `HashKeyType`, `RoomManager`, `HoveredNavItemPayload`, `CommandContext`, `TimeRaster`, `SceneMouseEvent`, `TRaw`, `SeoService`, `SuggestQueryInterface`, `HighlightRepository`, `NextcloudClientInterface`, `DataViewObject`, `IUserPPDB`, `FocusTrapManager`, `Uint64Id`, `ChatLoggedType`, `NativeReturnValue`, `RowList`, `MarkdownOptions`, `NodeCue`, `KeyboardNavigationAction`, `WebSocketAdapter`, `ComponentItem`, `IDeploymentStrategy`, `StynWalk`, `DBDocument`, `MapValue`, `ColumnsPreviewType`, `UniqueId`, `ConditionFilterType`, `UseCaseExecutorImpl`, `NetworkInterface`, `CommonStatusBarItem`, `SerializedPolicy`, `InputHandler`, `SessionTypes.Proposal`, `SnailfishNumber`, `IVottJsonExportProviderOptions`, `AssessmentTypeData`, `QR.QueryResult`, `IPty`, `UserAnnotation`, `PointComponentProps`, `ProtectedRequest`, `LogAnalyticsLabelView`, `DemoChildGenerator`, `XArgs`, `BOOL`, `FilterValueFormatter`, `SnackbarService`, `AkimaCurve3dOptions`, `BufferEncoding`, `msRest.RequestOptionsBase`, `ChildItem`, `UnauthorizedException`, `ITenantCreateInput`, `RequestHandlerEntry`, `HttpEffect`, `ITextFieldExpression`, `FieldTypeByEdmType`, `KeywordMatcher`, `ts.ClassDeclaration`, `Tenancy`, `UserSelection`, `egret.DisplayObject`, `GestureTypes`, `BigNumber`, `LastSnapshot`, `CopyImageCommandInput`, `AstNodeFactory`, `AddTagsCommandOutput`, `AsyncManager`, `RankState`, `ApiItem`, `P2PPeerInfo`, `IVueComponent`, `WebLayer`, `SubscriptionCallback`, `InsecureMode`, `ExtractResponse`, `Electron.App`, `Filesystem.FileExistsAsync`, `BaseError`, `TopicType`, `QueuePeekMessagesResponse`, `BuilderOutput`, `ExpressRouteCircuitPeering`, `TreeService`, `SketchName`, `BuilderDataManager`, `SqrlSlot`, `storeType`, `RecordsFetchFilters`, `NativeNode`, `BrowserIndexedDBManager`, `bbox`, `ContractFunction`, `requests.ListThreatFeedsRequest`, 
`UpdatePayload`, `VoyagerConfig`, `ExpressAdapter`, `TokenItem`, `monaco.editor.ICodeEditor`, `ShellString`, `LView`, `Year`, `TAuditReport`, `BriefcaseConnection`, `EqualityComparison`, `MutableVector4`, `RowAccessor`, `UserActionBuilder`, `Language`, `NodePositionOffset`, `DebugState`, `MultiMaterial`, `FrameOffset`, `SpeechRule`, `GetIdentityVerificationAttributesCommandInput`, `sdk.SpeechRecognitionEventArgs`, `ResolvedModuleFull`, `BankTransfer`, `ParseNode`, `ConstructionSite`, `ng.IFilterService`, `Meeting`, `NavigationEntry`, `TaskCustomization`, `RangeValue`, `IInjector`, `AtlasResourceSource`, `ITypedDump`, `IScoutStems`, `DocumentPositionStateContext`, `ServiceConfigurationOptions`, `MarkovChain`, `FileDoc`, `InternalDefaultExpression`, `URLDescriptor`, `ActionItem`, `TextInput`, `WhereGrouper`, `Filesystem.ReadJsonSync`, `MessageGeneratorImplementation`, `AttributeOptions`, `Evaluation`, `core.BIP32Path`, `AnyAction`, `CmsModelFieldToGraphQLPlugin`, `SearchResult`, `ITokenizer`, `RadixSpunParticle`, `NerModelVersion`, `WorkItemUI`, `LogItem`, `GfxTextureSharedP_WebGPU`, `ModuleDest`, `IESAggField`, `IDocumentReference`, `IdSelector`, `KnownDomain`, `IPdfBrick`, `Timers`, `ConditionOperator`, `server.Server`, `AppConfigType`, `VirtualFolder`, `PlacementType`, `Displayable`, `_Column`, `TextView`, `FormatterService`, `lsp.Connection`, `Stores`, `DeviceTypes`, `BotConfig`, `TimelineMax`, `CheckpointTrie`, `SupportedFiletypes`, `EnvPair`, `RARC.JKRArchive`, `ExecutableItem`, `CacheChangeEventData`, `ReviewId`, `EngineTypes`, `BitBucketServerPRComment`, `ServiceWorkerRegistration`, `DragAction`, `ObservableQuerySecretContractCodeHash`, `ApolloReactHooks.MutationHookOptions`, `TransferTransactionUnsigned`, `SurveyTemplateRendererViewModel`, `ViewType`, `NPCActorCaps`, `PathSolution`, `EndorsementPolicy`, `TArrayValue`, `TouchState`, `TIdType`, `GetStaticPropsContext`, `BinaryOpComplexProgram`, `ExplorationInfo`, `EntitySet`, `ListParticipantsRequest`, `PanelProps`, `FastifyTypeBoxRouteOptions`, `ReferenceArray`, `GVBox`, `PortRange`, `ObjectDefinitionBlock`, `CallHierarchyIncomingCall`, `UserUpdate`, `DataListItem`, `DepNodeAssembly`, `EdmxEntitySet`, `TraitLabel`, `FieldStruct`, `ObjectUpdatesService`, `WebsocketService`, `AnyProps`, `XDomain`, `IPercentileAggConfig`, `TestColdObservable`, `XNodeData`, `MessageWorkflowMapping`, `DatasetEntry`, `RawDimension`, `FileDiagnostics`, `JobCreatorType`, `FileChunkIterator`, `Progress.IChunk`, `Province`, `KVS`, `AutoFix`, `TestServiceContext`, `InputTypes`, `DiscoverUrlGeneratorState`, `AddValue`, `CallGNode`, `ContainerFormData`, `THREE.Camera`, `IBuildTaskConfiguration`, `PrefetchOptions`, `DefaultRouterOptions`, `EnableOrganizationAdminAccountCommandInput`, `SerialAPICommandContext`, `IOrganizationTeamCreateInput`, `AuthTokenResult`, `CheckoutState`, `any`, `NodeCryptoCreateHash`, `DetectionResultItem`, `EmitterSubscription`, `CodePointCharStream`, `CIImage`, `SNSTopicArnNotFoundFault`, `JSONWebToken`, `AppletType`, `CSSScope`, `CreateConnectorResponse`, `GetSessionCommandInput`, `DataState`, `...`, `ExternalModuleInfo`, `ImportedData`, `ForwardTsnChunk`, `SpriteArgs`, `ListItemBase`, `AppSources`, `CommandArgs`, `d.CollectionCompilerVersion`, `PerPanel`, `TimeScaleUnit`, `Checkpoint`, `ICoordinates`, `ReaderFragment`, `Uint8Array`, `ImGui.IO`, `ChangePasswordState`, `EarlyStoppingCallbackArgs`, `StableRange`, `IAmazonServerGroupCommand`, `SavedObjectsFindOptions`, `MGLMapView`, `DouyuPackage`, `ChromeBreadcrumb`, `DeserializeFunction`, 
`SearchThemeAttributes`, `CodeGeneratorFileContext`, `FormEntry`, `ConstructorTypeNode`, `FrameworkVersionId`, `RepositoryEditWriteModel`, `NPMContext`, `Failure`, `ArrayCriteriaNode`, `PrismaObjectDefinitionBlock`, `AuthResult`, `Collapse`, `ModelValue`, `LayoutAction`, `MockCamundaWorkflow`, `NotifyModel`, `CustomEditorUpdateListener`, `StatedBeanContainer`, `PublicDeviceDTO`, `request.CoreOptions`, `Comm`, `NbJSThemeOptions`, `GPUData`, `ui.Rectangle`, `ts.EnumDeclaration`, `HttpOperationResponse`, `ClientIntakeFormIded`, `Arity1`, `PromiseFulfilledResult`, `ParserInputWithCtx`, `RelatedClassInfo`, `ValidationResponse`, `lf.schema.Table`, `UploadableMap`, `requests.ListInstancesRequest`, `SourceType`, `EmbedProps`, `PdfSolidBrush`, `IRule`, `ParseFunction`, `WebGLTransformFeedback`, `LocationResult`, `TMap`, `ConstantSchema`, `UnwrappedArray`, `IPartialDeploymentTemplate`, `ComputeVariant`, `t.MemberExpression`, `IntPair`, `LineIds`, `DeleteUserProfileCommandInput`, `Mdast.Parent`, `FieldItem`, `RepositoryState`, `ExistsFilter`, `Crop`, `AsyncActionCreators`, `EthUnlockRecord`, `ScriptLike`, `GMSMapView`, `V1WorkflowOutputParameterModel`, `DataContextGetter`, `arc_result`, `ReportConfigurationModel`, `KeyLoader`, `TFEOpAttr`, `CmdParts`, `TwingOutputBuffer`, `EntityReference`, `MeetingSessionStatus`, `ToastPosition`, `Cubic`, `ObjectCallback`, `Security2CCMessageEncapsulation`, `Highlighter`, `ICommandMapping`, `KeywordPrefix`, `IUserService`, `MySet`, `SharingSession`, `DeleteProjectResponse`, `messages.Meta`, `IconMap`, `IEvents`, `PingPongObserver`, `Totals`, `PuzzleLoader`, `CaptionDescription`, `BackstackEntry`, `Debouncer`, `DidChangeLabelEvent`, `FeatureMap`, `MonoStyleViews`, `SBDraft2CommandOutputBindingModel`, `DbEvent`, `FeatureRegistry`, `Maximum`, `GunGraphConnector`, `IntervalCollection`, `TestStore`, `RegExpMatcher`, `DocumentService`, `ListAliasesCommandInput`, `ScenarioState`, `IVectorV`, `IProjectSetupData`, `InstallablePackage`, `requests.ListCpesRequest`, `RollupError`, `vscode.ShellExecutionOptions`, `EditableCell`, `TCollection`, `d.BuildTask`, `ViewWithBottomSheet`, `CdsNavigationItem`, `XPortalService`, `DAL.DEVICE_ID_BUTTON_B`, `JsonRpcProvider`, `Trade`, `AngularFire`, `RenderModel`, `GraphQLHOCOptions`, `DBInstance`, `FontWeightType`, `TransactionCtorFields`, `NodeAnnouncementMessage`, `UiActionsEnhancedSerializedEvent`, `AccountMongoRepository`, `DroppableProvided`, `ListTasksRequest`, `EditService`, `MenuPath`, `ListResolversRequest`, `NotificationTargetItem`, `ListHttpProbeResultsRequest`, `ReuseTabCached`, `MouseCoordinates`, `Mention`, `GetBlacklistReportsCommandInput`, `EdgeInsets`, `IBackendApi`, `VocabularyService`, `ReadableSignal`, `DaffSubdivisionFactory`, `DomCallback`, `TaskNow`, `RenderColumn`, `SemanticDiagnosticsBuilderProgram`, `YarnPackageManager`, `RouterRes`, `IndexService`, `SerializedEntity`, `SfdxFalconResult`, `ConnectionService`, `ChartAntVSpec`, `ElectronStore`, `CatDTO`, `StoreModel`, `ConfigItem`, `PluginInfo`, `CHR0`, `InvoicesService`, `ExtraControlProps`, `PackageJSON`, `DeviceInfo`, `OptimizationResult`, `ActivityAttendance`, `TransformContext`, `RawToken`, `SunBurstHierarchyNode`, `TransformPointFn`, `EntityAddOptions`, `UpdateClusterRequest`, `EnvConfig`, `RenderTextureInfo`, `ArticleOverview`, `UriResolver`, `ExprVis`, `MyTabItem`, `ApexTestGroupNode`, `TemplateHead`, `ListComponentsCommandInput`, `CommandEntityBuilder`, `FabricObject`, `GfxCullMode`, `ThreadChannel`, `VisibilityVertex`, `ApiConfiguration`, 
`PrintJsonWithErrorsArgs`, `Swagger.Schema`, `ClipboardJS`, `FormAppSetting`, `CtrTextureHolder`, `IDBCursorDirection`, `A6`, `requests.ListPackagesInstalledOnManagedInstanceRequest`, `InvitationDTO`, `AggsMap`, `TileReadBuffer`, `IdentityPermissions`, `KeyboardEventArgs`, `EnrichedAccount`, `ChannelMessageRemove`, `MediaSource`, `ITimelineItem`, `HTMLOListElement`, `IFluxAction`, `Node.Expression`, `PropEnhancerValueType`, `SolidityListener`, `ShaderDescriptor`, `ManagedEvent`, `RLANKeyframe`, `AnimationPromise`, `QueryProvidersRequest`, `AsyncDiffSet`, `EVMEventLog`, `IBaseProps`, `CategoryAxis`, `NamespacedWireDispatch`, `ViewBoxParams`, `TypedNavigator`, `PolyserveApplication`, `GitlabUserRepository`, `Rx.PartialObserver`, `ListableObject`, `Backend`, `WebGLFramebuffer`, `IKeyboardBindingProps`, `GX.KonstAlphaSel`, `PDFArray`, `FlatpickrFn`, `SubscriptionAccountInfo`, `OpenLinkProfiles`, `TestTimelineDataProvider`, `Identifier`, `CreateFleetCommandInput`, `GfxBindingLayoutDescriptor`, `ObjectExpression`, `SharedContentImp`, `HierarchicalEntity`, `LineResults`, `DeleteStackCommandInput`, `VirtualConfig`, `ToggleState`, `ProxyDao`, `DeeplinkPayPayload`, `UploadFileStatus`, `IActionInputs`, `CarouselButton`, `AnyChannel`, `TextFormatter`, `MockApiClient`, `ODataCallable`, `PieChart`, `UserResumeEntity`, `ValidationControllerFactory`, `ENDStatement`, `BotTagMasks`, `LoadAll`, `DeleteDomainResponse`, `ColorPickerProps`, `ITestContainerConfig`, `GoogleMeetSegmentationConfig`, `CollapseGroupProps`, `IEmitterBehavior`, `ConnectorReferenceHandler`, `IResolver`, `PrintTypeFlags`, `ExtendableMessageEvent`, `ChainsService`, `angular.ui.bootstrap.IModalServiceInstance`, `PddlWorkspace`, `EntityComparer`, `Topology`, `ProjectReflection`, `IComponentState`, `V3RouteWithValidQuote`, `SWRResponse`, `SpanKind`, `TargetSummary`, `EffortInfo`, `ArenaAttribute`, `ICommand`, `TextRenderer`, `StdSignature`, `FunctionDefinitionContext`, `PowerlevelCCReport`, `ThyFormDirective`, `ParityRegion`, `CdsRadio`, `CreateAliasCommandInput`, `OrderDoc`, `ITokenPayload`, `PriceAxisViewRendererData`, `sdk.ConnectionEventArgs`, `DebugProtocol.Request`, `OneofDescriptorProto`, `Element.JSON`, `ActionData`, `RouteMeta`, `LocalBlockExport`, `DOMString`, `VirtualItem`, `Capabilities`, `DeepLinker`, `FatalErrorsSetup`, `MatStepperIntl`, `ResizeStrategy`, `types.CSSProperties`, `VSvgNode`, `WorldState`, `NumberSystemName`, `EditorChangeLinkedList`, `DragSourceArguments`, `requests.ListRulesRequest`, `PointerCtor`, `PartialStoryFn`, `UpdateProfileCommandInput`, `jsdom.JSDOM`, `SocketContextData`, `IFirmware`, `CloudProvider`, `IMetadata`, `CaseClause`, `ICandidateFeedbackCreateInput`, `QRCode`, `ScaleContinuousType`, `BindingOrAssignmentElementTarget`, `ClusterContract`, `StringProperty`, `Type_AnyPointer_Unconstrained`, `GUILocation`, `IGenericTagMapper`, `ChatChannel`, `OfficeLocation`, `GenericFormProps`, `InitializationOptions`, `MappedTopicsMap`, `MultiAPIMerger`, `IAggConfig`, `WithContext`, `FunctionBuilder`, `ElementModels.IElementWrapper`, `DragDropManager`, `ast.ExpressionNode`, `StakingTransaction`, `NamespaceNode`, `FormulaDescriptor`, `TestBackend`, `Snowflake`, `IClusterHealthChunk`, `UniDriver`, `GridStackItemWidget`, `TimeBin`, `StringifiedType`, `ConfigHttpLoader`, `ICache`, `ParserArgs`, `Tagged`, `IExpressionEvaluationContext`, `QueryParamDef`, `PropertyAssignment`, `DocumentQuery`, `ElementLocation`, `IOpenFileItemsState`, `ReaderContext`, `NextApiResponse`, `LanguageClientConstructor`, `ReflectionGroup`, 
`TfsCore.TeamContext`, `RequestResponse`, `SiteConfiguration`, `CharMap4`, `RouterState`, `Graphics`, `RectangleObject`, `KeyboardKey`, `NullAction`, `ProvisioningParameter`, `GitClient`, `ErrorResponseData`, `Stub`, `CompilerConfig`, `ListRecommendationsRequest`, `ATNState`, `IAssetComponent`, `ReactNode`, `Animated.AnimatedInterpolation`, `IdentifierToken`, `Triangle3`, `FormatType`, `ContractCallReturnContext`, `ISimplestConfigFile`, `ExpectationRepository`, `ProviderCard`, `FormatState`, `DescribeWorkspaceCommandInput`, `RuleManager`, `VersionInterface`, `TeleportContext`, `UserTypeReference`, `CreateUserDto`, `ArmArrayResult`, `rpcConfig`, `d.JsonDocsUsage`, `IClass`, `WrapLocals`, `RemoveRoleFromDBClusterCommandInput`, `ControllerParameterMetadata`, `HsDimensionTimeService`, `ComboConfig`, `AudioOptions`, `ResultDate`, `MessageContext`, `Events.pointerup`, `GetCapabilitiesXmlLayer`, `ArrayMultimap`, `JobID`, `constructor`, `Highcharts.StockToolsNavigationBindings`, `PrivateLinkConnectionApprovalRequestResource`, `EthereumTransactionTypeExtended`, `ChatLogged`, `FileConfig`, `IGlTFParser`, `HouseCombatData`, `PlaywrightElementHandle`, `nls.LocalizeFunc`, `CountryService`, `B8`, `LocalForageObservableChange`, `PositionService`, `BigIntMoneyBase`, `MathBackendWebGL`, `HandlerResult`, `ActionHandlerWithMetaData`, `LocaleNode`, `KeySignature`, `SeedGenerator`, `ValueDB`, `SlashDot`, `YoonitCamera`, `ParquetSchema`, `TableDiff`, `TaskDoc`, `FilterCallback`, `ServiceType`, `DuiDialog`, `QueryBidResponse`, `BuddhistDate`, `Stylesheet`, `CallbackAction`, `RepeatVectorLayerArgs`, `EmitterWebhookEvent`, `BlockNumberRepository`, `LocaleSpec`, `Comonad1`, `PathOptions`, `Frontier`, `ActionBar`, `Controls`, `IsoBuffer`, `CurriedFunction1`, `TriggerInteractions`, `CompilerWorkerContext`, `TEX0Texture`, `IteratorCreatorFn`, `es.CallExpression`, `TestDataset`, `DecoratedError`, `AvatarSize`, `GoogleAuth`, `PlasmicASTNode`, `SrtpSsrcState`, `NoteCollectionState`, `IItem`, `ListRecommendationsResponse`, `Integration`, `DeleteOneOptions`, `MatcherCreator`, `ModState`, `ProjectClient`, `PackageInstallationResult`, `DescribeChannelMembershipCommandInput`, `ComponentEmitInfo`, `LibraryInfo`, `StatFrame`, `EditableCircle`, `Print`, `ImGuiIO`, `TaskTreeItem`, `CKEDITOR.eventInfo`, `middlewareSingle`, `ConnectedSpaceId`, `AES`, `IIconOptions`, `SinonFakeServer`, `iReduxState`, `ComponentRef`, `ArrayAccessValue`, `d.BuildLog`, `RailPart`, `ExtrusionFeatureParameters`, `TelegramBot.Message`, `XUL.menupopup`, `MpProduct`, `IRepositoryState`, `AlertState`, `EventStoreDescription`, `RivenMod`, `MockConfiguration`, `EventCategoriesMessage`, `IBBox`, `PaginationInput`, `GraphQLInputType`, `VoidFn`, `FastifyRequest`, `MWCListIndex`, `WidgetType`, `ClientRect`, `ConfigurableEnumValue`, `Exception`, `AV1RtpPayload`, `DocumentRegistryBucketKey`, `FlagValidatorReturnType`, `ImmutableFrameTree`, `TransferService`, `ContextItem`, `FormValidator`, `Float64Array`, `WebRTCConnection`, `PropsType`, `WholeHalfNone`, `Directory`, `JsonParserContext`, `Geocoder`, `Data`, `SnippetProvider`, `Modify`, `MapChart`, `TemplateStruct`, `TItem`, `TableConstructor`, `RawJSXToken`, `StatusVectorChunk`, `ISubscriber`, `AuthenticateFn`, `NumberRowModel`, `OperatorContextFunction`, `ts.MapLike`, `TypeAssertionMap`, `ExecutionResultDataDefault`, `PaginationOptions`, `DashboardSetup`, `https.RequestOptions`, `CoverLetterService`, `MediaMatcher`, `FsReaddirOptions`, `IDeferred`, `SimpleSwapAppState`, `DharmaMultiSigWalletContract`, 
`RollupSingleFileBuild`, `VisualizationsStart`, `ComponentModule`, `AnimationInstance`, `RedisClientOptions`, `OptionsHelper`, `TETuple`, `IConfirmService`, `quantumArray`, `UrlEntity`, `DocumentSymbolCollector`, `TransactionEntityDataService`, `MergeProps`, `DashPackage`, `Reaction`, `Conditions`, `protocol.FileRequest`, `EarningsTestService`, `TAccesorKeys`, `BlockBody`, `SharedRoleMapping`, `Prepared`, `GetMessagingSessionEndpointCommandInput`, `Chai.ChaiStatic`, `FairCalendarView`, `IListProps`, `FlowPostContextManagerLabel`, `UserReference`, `TFnRender`, `STORES`, `And`, `cytoscape.Core`, `FirebaseFirestore`, `ClusterData`, `PrometheusClient`, `EntityCache`, `QuestionAdornerComponentProps`, `SCServerSocket`, `IListViewCommandSetExecuteEventParameters`, `IEdgeRouter`, `OutputTargetDistCollection`, `IGetPatchDirResult`, `TableNS.CellProps`, `Exchange`, `ParseConfig`, `SettingTypes`, `SourcePosition`, `requests.ListDrgRouteRulesRequest`, `ColorT`, `tsdoc.DocComment`, `EntAsset`, `messages.Examples`, `ProjectionRule`, `C6`, `WidgetState`, `TeamCity`, `GoToFormProps`, `Favorite`, `MaybeAccount`, `EslintConfig`, `RepairTask`, `Application.RenderOptions`, `OctreeObject`, `RecordType`, `IBenefitsSearchResult`, `FilterQuery`, `SendTxnQueryResponse`, `WhereCondition`, `PackageTypeEnum`, `StridedSliceSparseSpec`, `OutRoomPacket`, `GraphBatchedTransferAppState`, `RSSItem`, `MarkdownContributions`, `RxFormBuilder`, `BLSPubkey`, `Transaction`, `LastFmArtistInfo`, `W5`, `EmailAddress`, `UseRefetchReducerAction`, `TriggerState`, `NumberMap`, `tag.ID`, `TFlags`, `LineMessageType`, `TransactionButtonInnerProps`, `SpeakerService`, `InputParamValue`, `ByteOrder`, `Stapp`, `PreferencesCategories`, `DeleteConfigurationSetCommandInput`, `Fetcher`, `CSymbol`, `UserInfo`, `Focusable`, `MergeRequestPayload`, `APIService`, `HydrateResults`, `ReporterRpcClient`, `SortOrder`, `StudentRepository`, `ModifyDBClusterCommandInput`, `StepListener`, `HTMLCollectionOf`, `UntagResourceOutput`, `SnapshotConnection`, `IConnectToGitHubWizardContext`, `Lines.Segment`, `IdentityProviderMetadata`, `WindowType`, `IKbnUrlStateStorage`, `Config.Argv`, `SVGIconProps`, `RelationMetadata`, `BooleanInt`, `ListDeploymentStrategiesCommandInput`, `S2L2ALayer`, `requests.ListIpv6sRequest`, `MatSnackBarConfig`, `ListGroupsResponse`, `BlockHandle`, `SyncData`, `IRandomReader`, `PDFCheckBox`, `Resource`, `BasicEnumerable`, `ScryptedRuntime`, `LoggerFactory`, `ProtonApiError`, `EdaPanel`, `DatasetOpts`, `ManifestInventoryItem`, `StateOperator`, `TEX0`, `Functions`, `SignedStateReceipt`, `ConsumerParticipant`, `CpeDeviceConfigAnswer`, `IExecutionFlattedDb`, `AccessDeniedException`, `x`, `LocalVideoStreamState`, `ReactTypes.DependencyList`, `DMMF.SchemaField`, `IValueFormatter`, `MapIterator`, `DomainBounds`, `CrochetForNode`, `LogAnalyticsSourceDataFilter`, `EnumItem`, `ButtonTool`, `PieDataSet`, `_IPBRMetallicRoughness`, `CommandFn`, `AdditionalPropsMember`, `Matched`, `ITiledLayer`, `MethodName`, `WebGLTensor`, `IAssetMetadata`, `MetaProperty`, `HTMLTableRowElement`, `AuditInfo`, `StartJobCommandInput`, `CreateChannelModeratorCommandInput`, `TypeMapper`, `BoxSlider`, `GetByIndex`, `OBJLoader`, `VideoCreateResult`, `ReadStorageObjectId`, `vscode.DebugConfiguration`, `JitsiPeer`, `FungibleConditionCode`, `ScaleQuantize`, `MonthlyForecast`, `DetailedReactHTMLElement`, `ProviderOption`, `HiFiCommunicator`, `CdkRowDef`, `JumpPosition`, `InheritedProperty`, `IDataFilterConfiguration`, `SmoothedPolyline`, `OnePoleFilter`, `RevocationStatus`, 
`ValidatorStore`, `IModels`, `GADNativeAd`, `UserPool`, `Struct`, `AllDocsResponse`, `MessageSecurityMode`, `GrantAccessData`, `AppModel`, `RegisteredRuleset`, `ir.Block`, `ListRealtimeContactAnalysisSegmentsCommandInput`, `GenericClientConfiguration`, `Work`, `HeadBucketCommandInput`, `StreamEvent`, `Cons`, `StoreEnhancer`, `MemoizedFn`, `ReuseCustomContextMenu`, `GenericRetrier`, `ApiDef`, `Timings`, `Glue`, `ptr`, `AggregateRoot`, `IPluginPageProps`, `IpRecord`, `ClientOpts`, `AccountNetwork`, `ParseArgument`, `IpGroup`, `DashboardService`, `ISearchFeature`, `DevicesService`, `CameraHelper`, `PermissionStatus`, `SegEntry`, `SignaturePad`, `EffectAction`, `CreateDataSetCommandInput`, `DataRows`, `IWaterfallSpanOrTransaction`, `Events.exittrigger`, `CommandOutput`, `ActionCreatorFactory`, `ThemePlugin`, `RolesService`, `Mappings`, `SimpleGridRecord`, `ViewBox`, `MenuValue`, `TinaSchema`, `LexicalToken`, `GfxReadback`, `ts.Printer`, `WidgetIdTypes`, `IPolicy`, `Joi.ValidationResult`, `MyElement`, `PanService`, `TimeState`, `PayloadAction`, `UseMutationResult`, `DispatchQueue`, `ISummaryRenderer`, `GetBinaryPathsByVersionInput`, `FormDataEvent`, `ValidateKeyboardDefinitionSchemaResult`, `JsonMap`, `RibbonEmitter`, `BeachballOptions`, `SetOption`, `IngredientForm`, `SettingContext`, `CoreTypes.VisibilityType`, `ArrayServiceGetKeysByTreeNodeOptions`, `PolicyRequest`, `Inline`, `AVRExternalInterrupt`, `RippleConfig`, `Sampler3DTerm`, `fhir.Location`, `IClientOptions`, `PerformanceTiming`, `AtomicToken`, `AccountStore`, `ConnectionLocator`, `Puppeteer.Page`, `SigningCosmWasmClient`, `MapData`, `_Exporter`, `EducationalMaterial`, `KeyboardScope`, `FactoryBuilderQueryContract`, `TreeNodeViewModel`, `HydrationContext`, `CodeEditor.IToken`, `files.FullPath`, `RPiComponent`, `IAngularMyDpOptions`, `IVector3`, `ICreateData`, `IApiParameter`, `RexFile`, `ReferenceType`, `ITranslateConfig`, `MessageEntity`, `PathValue`, `Lut`, `Capability`, `InjectorType`, `Face`, `Beatmap`, `CallHierarchyItem`, `T.Task`, `IVpc`, `MarketFiltersState`, `IKeysObject`, `MemberExpression`, `requests.ListExternalNonContainerDatabasesRequest`, `FunctionComponent`, `NameNode`, `Single`, `NavSegment`, `GanacheRawExtraTx`, `FirebaseFirestore.Firestore`, `EquipmentSharingPolicy`, `ExtensionConfig`, `events.EventEmitter`, `SelectionInfo`, `OperationDefinition`, `LangiumDocument`, `MockNexus`, `BitMatrix`, `GrowStrategyMock`, `CheckResult`, `GfxRenderTarget`, `CreateUserInput`, `KeyedDeep`, `GaussianNoise`, `CGRect`, `EitherAsyncHelpers`, `TruncateQueryBuilder`, `JSDoc`, `BitcoinishTxBuildContext`, `TreeNodeState`, `UserGroupList_UserGroup`, `GetUrlFn`, `ParamModel`, `IPointUnit`, `MetricsOptions`, `Refetch`, `TargetList`, `GenericMeasure`, `FloatTerm`, `MessagePriority`, `IStringFilter`, `CustomUIClass`, `EventItem`, `ILifecycle`, `MessageTag`, `IsBound`, `DBMethod`, `MutableMatrix33`, `IteratorWithOperators`, `FlatRow`, `IGuildMemberState`, `SatRec`, `PaymentTester`, `LeafCstNode`, `EmptyParametersGatherer`, `Hotkey`, `Auction`, `child_process.SpawnSyncReturns`, `FModel.DllFuncs`, `OwnPropsOfControl`, `MemoryNavigator`, `MDCDialogAdapter`, `SalesforceFormFields`, `UiService`, `RequestBodyParserOptions`, `ModelAndWeightsConfig`, `BespokeClient`, `Functor2`, `BroadcastOptions`, `PanelOptionsEditorBuilder`, `CreateRepositoryCommandInput`, `UserInfoOidc`, `Events.collisionend`, `digitalocean.Account`, `DAL.KEY_BACKSPACE`, `GetTranscriptCommandInput`, `IHeaderProps`, `MenuPopperProps`, `SafeStyle`, `MyNode`, `GeoContext`, 
`DeleteAnomalyDetectorCommandInput`, `DatabaseResultSet`, `Attrs`, `IdxTree`, `IHighlight`, `School`, `PreviewData`, `MultiEmitter`, `GraphicsShowOptions`, `ScriptTags`, `VisitorFunction`, `ParsedRule`, `Models.KeyValuePair`, `EmitNode`, `CreateSecurityConfigurationCommandInput`, `ProofNode`, `FormatResult`, `AzureSubscription`, `BalanceRequest`, `StateOptions`, `zmq.Dealer`, `RegisterDto`, `MerchantMenuOrderGoodsInfo`, `AttachmentMIMEType`, `IIteratorResult`, `FeatureConfig`, `FormValueType`, `ColumnBuilder`, `AuthInfo`, `INodeHealthStateChunk`, `AccountFacebook_VarsEntry`, `XroadConf`, `Functor1`, `PendingWrite`, `IExecuteResponsePromiseData`, `PlayerProp`, `requests.ListComputeCapacityReservationInstancesRequest`, `RoverStateReturn`, `StorageOptionsChangeData`, `EnvironmentManager`, `StrapiModel`, `BaseTheme`, `HyntaxToken`, `AreaLightInfo`, `MIDIAccess`, `Invitation`, `ISessionService`, `QuizServices`, `Cookies.Cookie`, `SiteMetadata`, `SharePublicKeyOutput`, `DNode`, `DataTypesInput.Struct1Struct`, `InspectPropertyReport`, `ContractPrincipal`, `DrawerContentComponentProps`, `MarkdownDocument`, `AwsEsdkKMSInterface`, `IDataTableColumn`, `FileWrapper`, `BlockArchiveLine`, `DataTransferItem`, `RecordColumn`, `ExpansionPanel`, `IUserSettings`, `Protocol.Network.RequestWillBeSentEvent`, `Ast`, `NumberKey`, `PlotCurveTypes`, `IAmazonServerGroupCommandResult`, `PlayerStat`, `ContractCallOverrides`, `GeoPolygon`, `Scan`, `Deck`, `ICommandContext`, `GetZoneRecordsResponse`, `OrderBookOrderDTO`, `Chainable`, `BookmarkItem`, `IAppSettings`, `QueryMessage`, `PairData`, `Datastore`, `LayoutOptions`, `TaskConfigurationScope`, `AttributeServiceOptions`, `TextElementStyle`, `IMoonData`, `DescribePipelineCommandInput`, `Gateway`, `FieldFormatsGetConfigFn`, `GraphQLTagTransformContext`, `Types.PostId`, `URLTokenizer`, `ArenaSelection`, `ScaleData`, `OnlineUserType`, `SettingGroup`, `ConfirmDialogDataModel`, `CallCompositePage`, `UpdateSiteCommandInput`, `ImportCacheRecord`, `FieldValidateConfig`, `LRUMap`, `Dex`, `LeagueStore`, `ReportingInternalSetup`, `NamespacedWireCommit`, `Analytics`, `ImageEnt`, `vscode.CustomDocument`, `RowId`, `RowRenderer`, `RestConfigurationMethodWithPath`, `requests.ListPluggableDatabasesRequest`, `BNString`, `DistributionData`, `GlobalInstructionType`, `OperateBucketParams`, `CanvasTextAlign`, `SBDraft2ExpressionModel`, `MapOptions`, `ListProjectsResponse`, `Http3PrioritisedElementNode`, `TransitionableCielchColor`, `Discriminated`, `ConsCell`, `Rectangle`, `MouseAction`, `BitcoinCashSignedTransaction`, `requests.ListAlarmsStatusRequest`, `GetOperationCommandInput`, `DeleteDatabaseCommandInput`, `AccessoryTypeExecuteResponse`, `SynthBindingName`, `CustomizePanelProps`, `Component`, `DocfyResult`, `ValidatorFn`, `WordInName`, `PredicateFn`, `Unsubscribe`, `OnSubscriptionDataOptions`, `RouteView`, `VuforiaSessionData`, `GatewaySession`, `IMatrixFunc`, `BaseScope`, `BTree`, `APIResponseCallback`, `ContainerItem`, `APIChannel`, `RayPlaneCollisionResult`, `DigitalCircuitDesigner`, `NodeWallet`, `GlobalSearchResult`, `OptionsType`, `AccessKeyRepository`, `IKeyRing`, `EntityCreate`, `IntegrationClass`, `BaseParams`, `ResizeHandle`, `PropSidebarItem`, `ISecurityGroup`, `Jimp`, `JhiEventManager`, `LCImpl`, `ModelFitArgs`, `IOverlayAnimationProps`, `SSRHelpers`, `ITokenProvider`, `SendCustomVerificationEmailCommandInput`, `CanvasContext`, `AppInfo`, `MonthData`, `ProjectListModel`, `ISequencedClient`, `Primitive`, `ControlFlowInfo`, 
`AddApplicationInputProcessingConfigurationCommandInput`, `TestMessagingService`, `SimpleAuthenticationDetailsProvider`, `ValidationData`, `CreateCategoryDto`, `ApiNotificationReceiver`, `ThemeIcon`, `OtokenInstance`, `utils.BigNumber`, `SecurityRating`, `Tagname`, `LifecycleFlags`, `ResolvedAxisOptions`, `RequestNode`, `SharedState`, `ResourceLink`, `ObjectSize`, `FirewallPolicy`, `ERC1155Mock`, `CategoryEntity`, `Venue`, `SocketChannelClient`, `FileUploadService`, `CheckPrivateLinkServiceVisibilityRequest`, `NameStyle`, `ChangeFlag`, `CFMLEngineName`, `WsService`, `IGameChara`, `WalletConnectProvider`, `FourSlash.TestState`, `Creature`, `fs.FileStorageClient`, `MultipleTypeDeclaration`, `SFOverrides`, `LogWidget`, `UnaryOperationNode`, `DataRecord`, `RecordItem`, `VProps`, `JPartition`, `ConvertedLoopState`, `ActionsInTestEnum`, `DeleteWorkRequestResponse`, `BridgeMessage`, `Types.NavigatorRoute`, `SelectionTree`, `FormPropertyFactory`, `GetShardIteratorCommandInput`, `MileStoneName`, `ActionTypeConfigType`, `OnRenderAvatarCallback`, `IAsyncEnumerable`, `ParsedDateData`, `HTMLVmMenuItemElement`, `RunData`, `ImagesContract`, `MockableFunctionCallCompiler`, `IChannelServices`, `UnicodeUtils.Direction`, `TProductCategory`, `Checker`, `HostClient`, `SessionKey`, `SynchrounousResult`, `IPodFile`, `ThemedStyledProps`, `TokenSet`, `TilePathParams`, `UIntArray`, `SVGStopElement`, `IApiExternalDocumentation`, `IComponentOptions`, `VisibilityVertexRectilinear`, `DeleteCustomVerificationEmailTemplateCommandInput`, `IBlobMetadataStore`, `MarkdownPostProcessorContext`, `IncomingHttpRequest`, `ToneAudioBuffers`, `Anim`, `BigIntConstructor`, `House`, `ChannelIdExists`, `HashMapEntry`, `SagaIteration`, `WithExtends`, `VerificationInitiateContext`, `BoxConstraints`, `React.Component`, `DialogLanguageModule`, `IndexPatternLayer`, `Stage`, `HoverResult`, `QueryTopicForHolder`, `AuthHelper`, `CoreService`, `TextElementGroupState`, `FocusMonitor`, `ListDedicatedIpPoolsCommandInput`, `APIResponseType`, `UiObject`, `GetDataSourceCommandInput`, `HandlerStack`, `PageG2`, `FILTERS.CUSTOM`, `ScreenReaderPartitionTableProps`, `SettingsPropsShared`, `NotebookPanel`, `PolicyService`, `QuerySuggestionGetFn`, `AutoImportResult`, `UIDialogRef`, `LinkedSearchProps`, `IntrospectionSchemaVersion`, `ConfigFlags`, `UnitStateMachine`, `OffscreenCanvas`, `ReplicatorQueries`, `OasDocument`, `TaskClient`, `WithEmptyEnum`, `ReadRequest`, `TaskInputs`, `UInt`, `SearchAllIamPoliciesRequest`, `UseGenerateGQtyOptions`, `DescribeSObjectResult`, `JSONSearchParams`, `Yendor.IPersister`, `ILanguageTemplate`, `Meta.Window`, `RawDatum`, `Watcher`, `d.ScreenshotConnector`, `ComponentLoader`, `Snippets`, `PythonPreviewManager`, `GrantInterface`, `IsAny`, `UpdateAvailableEvent`, `XmlEnumsCommandInput`, `ChannelStoredData`, `MockResource`, `DiezComponent`, `SyntaxErrorConstructor`, `BankAccount`, `cytoscape.NodeSingular`, `SizedBox`, `RuntimeBot`, `ImageResolution`, `DecipherCCM`, `SchematicTestRunner`, `Electron.Session`, `Controller`, `RefreshableView`, `Dialogic.InstanceEvent`, `IntrospectionOptions`, `RecordInput`, `MediatorService`, `DeepReadonly`, `TimeRangeLimit`, `ex.Input.KeyEvent`, `IContentType`, `Analyser`, `QRProvisioningInformation`, `DialogflowConversation`, `VertexDescriptor`, `RotationalSweep`, `RemoteTrackInfo`, `ParameterCondition`, `DataSourceOptions`, `IGitService`, `IConnectableBinding`, `IMemoryTable`, `PiPrimitiveProperty`, `NotificationCCReport`, `TaskIDPath`, `TypeOfExpression`, `DiskEncryptionSet`, 
`XPConnectService`, `O.Option`, `QueryEngineRequest`, `BootOptions`, `TerraformBaseCommandInitializer`, `SystemDomainApi`, `SnippetNode`, `Apollo.QueryHookOptions`, `GetFolderCommandInput`, `I18nMutateOpCodes`, `IGenericDeclaration`, `OfIterable`, `SpotifyWebApiJs`, `RSSSource`, `TUserAccountProvider`, `JoinBuilder`, `AuthenticationClient`, `OperationResult`, `AccessModifier`, `MigrateReset`, `TextElementBuilder`, `ImageAsset`, `CacheManagerContract`, `DescribeSessionsCommandInput`, `CompleteMultipartUploadCommandInput`, `DocumentClient.QueryInput`, `MigrationParams`, `ScrollSpiedElementGroup`, `RoastingMachine`, `StackContext`, `EditorOpenerOptions`, `PartialDeep`, `Ancestor`, `TrackedImportAs`, `PixelFormat`, `ChartDimensions`, `DateTimeModel`, `MidiValue`, `GeneratorProcess`, `QueueSendMessageResponse`, `SeriesIdentifier`, `ProcessLock`, `PrismaClientDMMF.Document`, `ClassProvider`, `PDFButton`, `FileSyntax`, `ISlickRange`, `Background`, `ProtocolConnection`, `FeatureVersion`, `CloudWatchLogs`, `ToastrManager`, `IndexLiteral`, `ChildWindowLocationProps`, `EMSSettings`, `PiValidatorDef`, `typescript.SourceFile`, `FileEvent`, `OperatorPrecedence`, `XYAxis`, `ManifestContext`, `NavigationType`, `DOn`, `State.FetchStatus`, `EntityIdentity`, `ValidatorSet`, `PagedRequestDto`, `Gadget`, `EventHandlerFn`, `ModalPage`, `PrecommitMessage`, `TestContext`, `PullRequest`, `ModeController`, `ClockHand`, `FsObj`, `CanvazNode`, `DomPortalOutlet`, `Problem`, `EntityOptions`, `AccessTokenScopeValidator`, `Scatterplot`, `Stopwatch`, `LinearlyReferencedFromToLocationProps`, `Exporter`, `ts.SyntaxKind`, `UIProps`, `ImportAliasData`, `IResolvedConfig`, `ISDK`, `ChatBaseSelectorProps`, `BrowserInfo`, `TexMatrixMode`, `DeclarationBlock`, `SecureNote`, `GrantType`, `MigrationOpenSearchClient`, `EulerRotation`, `DbSmartContract`, `CurrencyPair`, `ScanDirection`, `IndicesArray`, `Microgrammar`, `ProtocolError`, `ValidateResponse`, `CalibrationPanelProps`, `KeyboardNavigationHandler`, `MdcRadioGroup`, `DebugProtocol.EvaluateArguments`, `KeyModel`, `IAreaItemLevel`, `SfdxCommandlet`, `CachingLogViewer`, `IssueOptions`, `ITemplatizedLayout`, `MediaStreamTrack`, `VoiceFocusConfig`, `ExtendedArtifact`, `RpcRemoteProxyValue`, `AncestorDefs`, `LSA`, `CreateConnectionDTO`, `RouterReducerState`, `Decoded`, `CompilerBuildResults`, `HTMLParser`, `IReferenceType`, `RoomData`, `Fact`, `EncryptionConfiguration`, `I18NLocale`, `CollectionChangedEventArgs`, `AutoScaling`, `ProcedureRecord`, `DeleteProjectCommand`, `Omit`, `IWorker`, `DocumentInput`, `MealTicketRemoval`, `GridConfig`, `IMessageHandler`, `HttpRequestWithLabelsAndTimestampFormatCommandInput`, `IEditorMouseEvent`, `NamedItem`, `DashboardCellView`, `Cropper`, `MyMap`, `VmNetworkDetails`, `AnalysisResponse`, `TCssTemplate`, `DeleteBotCommandInput`, `ICorrelationTableEntry`, `ANC`, `DatatableVisualizationState`, `StandardContracts`, `VertexTypeStore`, `t`, `RedocNormalizedOptions`, `QueriesResults`, `TrackedMap`, `HarmonyAddress`, `FlowNarrowForPattern`, `NVMParser`, `StateBottomSheet`, `XHROptions`, `VideoProps`, `IResolvedIDs`, `DidDocumentBuilder`, `LexicalScope`, `Vin`, `ProsemirrorNode`, `TypeContent`, `Locator`, `BalmConfig`, `PerformAction`, `SearchProfilesCommandInput`, `PipeTransform`, `CreateClusterResponse`, `BatchPutMessageCommandInput`, `requests.ListDrgRouteDistributionStatementsRequest`, `GQLType`, `AthleteSnapshotModel`, `IssueCommentState`, `Cursor`, `CurrentRequest`, `BlockDeviceMapping`, `BtcUnlock`, `PythonVersion`, `BeInspireTreeNode`, 
`RoleResolvable`, `RemoveFromGlobalClusterCommandInput`, `ABIReturn`, `ProgressType`, `BrowserBehavior`, `ChannelInfo`, `ISelection`, `ITdDataTableSortChangeEvent`, `PartialC`, `Events.pointerdragmove`, `DeleteEnvironmentCommandInput`, `KernelMessage.IMessage`, `AlertsClient`, `MerchantGamePrizeEntity`, `WidgetControl`, `LRU.Options`, `ex.Actor`, `AssetsList`, `HeadingNode`, `AssetData`, `BoxUnit`, `Convert`, `ShaderId`, `GetPostsResponse`, `IngressSecurityRule`, `NonlocalNode`, `SavedObjectSaveOpts`, `InjectionError`, `Rebuilder`, `BlueprintContainer`, `AsyncThunks`, `TaggedLiteral`, `DatatableRow`, `ODataNavigationPropertyResource`, `DescribeTagsCommandOutput`, `types.TracerBase`, `WorkflowOutputParameterModel`, `IEquipmentSharingPolicy`, `ElementDescriptor`, `FeedItem`, `Hapi.Request`, `MinecraftLocation`, `SetShape`, `LexoRankBucket`, `TileImageSize`, `ChokidarEvents`, `GridState`, `InfoDialogService`, `BottomNavigationTabBase`, `RafCallback`, `DoorLockLoggingCCRecordGet`, `InstanceTargetWithMetadata`, `RemoteRequest`, `ISlideObject`, `Regl`, `ClientPlugin`, `CancellablePromiseLike`, `RComment`, `NumberW`, `SvelteDocumentSnapshot`, `EntityFactory`, `WalletResult`, `AnnotationWriter`, `AnalyzerService`, `TextLine`, `LimitToPipe`, `JSONType`, `SearchScope`, `TokenIdentifier`, `CategoricalAggregate`, `DragResult`, `ILiteralExpectation`, `InteractiveConfig`, `Schedule`, `ICloudTimerList`, `SFU`, `AsyncHook`, `DeleteDatasetCommandOutput`, `TrailImage`, `MIRBody`, `IObjectWithKey`, `TabHandler`, `WebCryptoDecryptionMaterial`, `PTG`, `TemplateElement`, `LongTermRetentionPolicy`, `BatchConfig`, `OnLoadParams`, `AlainSTConfig`, `IToastCache`, `IPanesState`, `TeamModel`, `ShallowStateItem`, `CSSProperty`, `BarSeriesStyle`, `ProjectContainer`, `TypeValue`, `Movie`, `JSDocMethodBodyCtx`, `CompletionProvider`, `PageObjectConstructor`, `PluginVersionResource`, `PeerTubeServer`, `RectangleEditOptions`, `MarkerElement`, `ExprNode`, `SerializableError`, `MockConfig`, `KudosTokenFactoryService`, `ImportMap`, `ContextShape`, `MaterialVariant`, `TLE.NumberValue`, `AccountsOperationStep`, `IVirtualRepeater`, `UseMetaState`, `GenericError`, `SimpleContext`, `AwsServiceFactory`, `PairingTypes.Proposal`, `AdaptElementOrNull`, `AnalysisRequest`, `TickOptions`, `IDateUtils`, `ResultTree`, `RNNCellForTest`, `MongooseFilterQuery`, `GenericStyle`, `IServiceWizardContext`, `TrackedCooldown`, `WebDriver2`, `TargetGroupAttachment`, `MyClassWithReturnExpression`, `FactoryDatabase`, `OptionalDefaults`, `GX.TevBias`, `LockFile`, `DialogItemValue`, `FragmentMap`, `SelectorList`, `MalSymbol`, `TopologyObjectId`, `IEmployeeJobPost`, `IPropertyGridEditor`, `FasterqLineModel`, `ConvLSTM2D`, `AtToken`, `PostContentDocumentRequest`, `NgrxJsonApiStore`, `LiveDatabase`, `XPCOM.nsIJSID`, `ReactiveVar`, `FlexItemStyleProps`, `ElectricRailMovingPoint`, `MonitorRule`, `ApiRequest`, `AnnotationPointType`, `AbstractGraph`, `def.Vec2`, `LucidRow`, `SceneModel`, `MilestoneDataPoint`, `RightResolvable`, `RuleType`, `WeightsManifestConfig`, `IosDependency`, `NzDebugConfig`, `JPAExTexBlock`, `ts.NodeFactory`, `k8sutils.KubeClient`, `providers.WebProvider`, `Field.PatchResult`, `TransactResult`, `RequestType2`, `CoreOptions`, `SingleSigSpendingCondition`, `ElementProperties`, `http.RequestOptions`, `SingletonList`, `X12Interchange`, `TSESTree.Statement`, `CacheManager`, `RxSlpStream`, `PivotItem`, `VersionPolicy`, `DriveNumber`, `VcalAttendeeProperty`, `Until`, `INestApplicationContext`, `BeaconBlockHeader`, `AuthData`, `TestResolverDTO`, 
`FabricGatewayConnectionManager`, `UserRegisterResource`, `OutPoint`, `IFluidCodeDetails`, `ApolloMutationElement`, `DaffOrderReducerState`, `PromiseExtended`, `SWRInfiniteConfiguration`, `ConnectionDTO`, `SelectBox`, `UserToken`, `AxisOrder`, `DejaTile`, `PropMap`, `TView`, `MenuConfig`, `TkeyStoreItemType`, `ToggleCurrentlyOpened`, `BlockFile`, `BaseFullNodeDeploymentConfig`, `PouchDB.Core.Document`, `OfflineContext`, `ContractTxQueryResult`, `Printer`, `LinkProps`, `PackageInfo`, `InferenceContext`, `CodeCommit`, `SKFrame`, `CSSService`, `Bookmark`, `PayloadBundle`, `Serie`, `TypeormRawSetting`, `IGrammar`, `RelationsOpts`, `ComponentsObject`, `ThyTreeSelectNode`, `StaticLayoutProps`, `IBidirectionalIterator`, `ILicense`, `CliCommand`, `StackCollection`, `EvCallData`, `GetObjectOutput`, `NumId`, `OidcSession`, `VideoState`, `JSDocTemplateTag`, `BarcodeInfo`, `K8sResource`, `UserPoolClient`, `inversify.Container`, `Grant`, `JSONRPCClient`, `STEP_RECORDS`, `NodeMessage`, `OutputTargetDistTypes`, `Pilotwings64FSFile`, `DeleteSecurityProfileCommandInput`, `ImpressionSender`, `RefObject`, `NDArrayMath`, `DeleteTagsRequest`, `CausalTree`, `SizeLimitChecker`, `PathStartCoordinates`, `MultisigTransaction`, `HttpRequestWithLabelsCommandInput`, `XMLDocument`, `DealStage`, `SdkAudioMetadataFrame`, `ISong`, `HandleOutput`, `TxnIdString`, `FaviconOptions`, `ForeignKeySpec`, `ListField`, `P2PInternalState`, `SponsorsResponseNode`, `Trader`, `IVirtualPosition`, `CapabilitiesSwitcher`, `KanbanBoard`, `HashFunction`, `RadixAccount`, `GLsizeiptr`, `FeatureGroup`, `Finder`, `ApplicationEntity`, `EquipmentSharing`, `EntitySubject`, `CodeModel`, `IMedal`, `TargetLocation`, `ChangeSetQuery`, `IRecurringExpenseDeleteInput`, `ITemplateDiff`, `AVPlaybackStatus`, `RenderTreeFrame`, `ObjectWithType`, `CamlQuery`, `IComponentDesc`, `FrameOverTime`, `BlobLeaseAdapter`, `CallMessage`, `UserRefVO`, `FloatArray`, `INodeCredentialDescription`, `BaselineFileContent`, `IApiRequestBody`, `NSApplicator`, `VdmMapping`, `DispatchType`, `requests.ListModelDeploymentShapesRequest`, `DeclineInvitationsCommandInput`, `ContextStore`, `AnimationData`, `ErrorWidget`, `TEUopType`, `SyncCommandWithOps`, `NginxDirective`, `d.CompilerCtx`, `ToolbarChildrenProps`, `Recorded`, `LeaguePriceDetails`, `DistanceFn`, `MockStoreAction`, `TaskCommand`, `HttpUrlGenerator`, `KleisliIO`, `DecoratedModelElement`, `SecretVerificationRequest`, `DisjunctionSearchQuery`, `TagsService`, `ResourceNotFoundException`, `Ray`, `ISceneData`, `ITranslationResult`, `T17`, `jest.SnapshotSerializerPlugin`, `ProjectService`, `DataConverter`, `FileChangeType`, `ChartBase`, `HTMLVisualMediaElement`, `GenerationNum`, `HttpClientOptions`, `MouseUpAction`, `ParameterContext`, `ArtifactStore`, `LastColumnPadCalculator`, `PartialBindingWithMeta`, `JsonDocsProp`, `FieldValidation`, `J3DModelInstanceSimple`, `ScannedReference`, `HasInfrastructure`, `ShaderAssetPub`, `TableServer`, `ValueSetterParams`, `SmartStartProvisioningEntry`, `ExternalRouteDeps`, `Neo4jConfig`, `AbstractCartProxy`, `KeyValueChanges`, `_ResourceConstantSansEnergy`, `ThemePrepared`, `RPGGame`, `requests.ListFileSystemsRequest`, `ResetPasswordAccountsRequestMessage`, `TagRenderingConfigJson`, `Bamboo`, `GLBuffer`, `KeyIndexMap`, `CreateHsmCommandInput`, `DownloadTarget`, `CircuitBreaker`, `SpeedtestResult`, `IBookmarks`, `StreamResetOutgoingParam`, `Side`, `App.windows.window.IOverlay`, `DebounceSettings`, `AuxResult`, `ClassIteratorFlags`, `PotentialPartnerActions`, `CreateProjectCommandInput`, 
`CoreExtInfo`, `ScopeState`, `PresentationTreeNodeLoaderProps`, `ItemDescription`, `VisTypeOptions`, `PanelPlacementMethod`, `ToolRunner`, `ColumnDescription`, `IWorkerModel`, `Fp2`, `BaseOptions`, `DOMError`, `InsertNodeOptions`, `HandlerProps`, `ManagedInstance`, `FormItemProps`, `HorizontalTable`, `IFieldType`, `IUser`, `NavigationTransition`, `ODataQueryOptions`, `KeylistUpdateMessage`, `TextDocumentIdentifier`, `VisTypeAlias`, `CoursesCounter`, `VirtualRepeat`, `Violation`, `LastfmTrack`, `ModifiersArray`, `TextArea`, `IQuestion`, `DependencyTracker`, `RecordedDirInfo`, `ts.BindingName`, `THandler`, `ESLMediaRuleList`, `RtcpReceiverInfo`, `RouteOpt`, `OrExpression`, `DomainInfoCache`, `MinionsController`, `Bzl`, `ActionContext`, `Conf`, `PipeCallback`, `ThyAbstractOverlayOptions`, `RestContext`, `HitSensorType`, `PropValidators`, `BoxColliderShape`, `TextFieldProps`, `GetUpdatedConfigParams`, `Path`, `CancellationTokenRegistration`, `RotationOrder`, `ActivatedRouteStub`, `Node_Annotation`, `ALBEvent`, `GetThreadResponseType`, `AbbreviationNode`, `MIRBasicBlock`, `RegisterData`, `ISecurityToken`, `SplashScreen`, `BackendType`, `ProductAnalyticalResult`, `firestore.GetOptions`, `UserTokenPolicy`, `GroundPlane`, `CharacterMaterial`, `NOTIFICATIONS_STATUS`, `ContractCallBuilder`, `QueryType`, `BlockchainLink`, `peerconnection.DataChannel`, `TestRequestResponse`, `visuals.Coord`, `GetClientFunction`, `ConfigSetName`, `CaseDesc`, `EncryptedSavedObjectsPluginSetup`, `IDeployState`, `FederatedAdapterOpts`, `Handle`, `Flag.Parser`, `SelectionNode`, `Streak`, `PReLULayerArgs`, `AdtLock`, `RecentCompletionInfo`, `PlayerController`, `IOProps`, `InternalCase`, `SQLiteDatabase`, `Toucan`, `Colony`, `three.Mesh`, `HTMLVmPlayerElement`, `SourceStream`, `ExportSummaryFn`, `IOutputs`, `DiagnosticReporter`, `TenantId`, `FlowElement`, `ComponentType`, `RunSegment`, `GitRepo`, `requests.ListDatabasesRequest`, `QueryFunction`, `JSZip`, `UnsignedTransaction`, `KC_PrismHit`, `BuiltLogic`, `Prefetch`, `DatabaseBundle`, `IAboutService`, `InjectedAccountWithMeta`, `RoutingIntent`, `ObjectBindingOrAssignmentPattern`, `PutReportDefinitionCommandInput`, `ListSession`, `AuthHeaderProcessor`, `TestSolo`, `EDateSort`, `V1CommandLineToolModel`, `Enemy`, `DirectiveTransform`, `ProgressEvent`, `JSDocFunctionType`, `GraphQLConfig`, `TupleNode`, `PaymentOptions`, `ts.Token`, `CursorModel`, `QueueEntry`, `QueryExecutor`, `MultiSliderProps`, `CausalRepoObject`, `UserOptions`, `ShEnv`, `HistoryEvent`, `IRECProductFilter`, `IBinaryData`, `IPackageRegistryEntry`, `MockGuild`, `PartyPresenceEvent`, `TemplateAnalysis`, `ActorLightInfo`, `Member`, `protos.common.SignaturePolicy.NOutOf`, `IInboxMessage`, `Signature`, `SignDocWrapper`, `TenancyEntityOptions`, `SIOPRequestCall`, `CommonVersionsConfiguration`, `EvaluatedNode`, `ObjectFactory`, `KeyCompoundSelector`, `OctokitType`, `RemoteNodeSet`, `Reflector`, `VariantObject`, `CompressedPatch`, `GenericNack`, `PaletteDefinition`, `CliApiObject`, `SpriteStyle`, `Tournament.TournamentConfigsBase`, `TestERC20`, `CookieEchoChunk`, `CardRenderSymbol`, `QueryTreeNode`, `IDownloadFile`, `IAccountsState`, `Error`, `Feedback`, `IndexedTechnique`, `ModelShape`, `SecurityGroupRulePorts`, `TDest`, `MeetingSessionConfiguration`, `AParentInterface`, `Extensions`, `GQLQuery`, `OverlayRef`, `System_String`, `IField`, `Doc`, `Dimensions`, `NestedStructuresCommandInput`, `PointerAbstraction`, `ScriptType`, `UpdateIPSetCommandInput`, `Promisify`, `ModelArtifactsInfo`, `MapObjectAdapterParams`, 
`DiscoverInputSchemaCommandInput`, `SheetContainer`, `CannonColliderShape`, `Mnemonic`, `OnConflictBuilder`, `Credentials`, `TextMeasure`, `MapComponent`, `CloudDevice`, `Balance`, `DownloadRequest`, `CombatPlayerComponent`, `DebugEditorModel`, `StatelessComponent`, `MicroframeworkLoader`, `CommitIdentity`, `IInfectionOptions`, `HydrateAnchorElement`, `TQuery`, `AnimationArgs`, `TEnumValue`, `DebugProtocol.PauseArguments`, `InterpolationFunction`, `PedComponent`, `THREE.Vector2`, `LintOptions`, `DescribeGroupCommandInput`, `DirectiveDefinitionNode`, `NodePhase`, `UserSettingsService`, `BlockedHit`, `IRootAction`, `EnvId`, `Signed`, `PictureGroup`, `DeleteRegistryCommandInput`, `AngularHttpError`, `SymbolMap`, `VmixConfiguration`, `MutationPayload`, `ChatType`, `BezierPoint`, `CustomerVm`, `MarkOperation`, `SdkStreamDescriptor`, `IDataFilterValueInternal`, `FMAT_RenderInfo`, `DestinationOptions`, `PermissionMetadata`, `RARCDir`, `NamingStrategy`, `TestFixture`, `FeedDict`, `AudioConfig`, `ArgumentDefinition`, `Anime`, `CountOptions`, `SerializedAction`, `BootstrappedSingleSpaAngularOptions`, `CallappConfig`, `SnippetsProvider`, `TrackingService`, `BoxrecBasic`, `CreateAssetCommandInput`, `PadplusRoomPayload`, `DispatchedPayload`, `IFormGroup`, `IDeliveryClientConfig`, `OscillatorType`, `Root`, `TimeoutOptions`, `IntersectionObserverEntry`, `StreamInterface`, `OptionLike`, `Character`, `Document`, `FrameNode`, `ConflictingNamesToUnusedNames`, `StackingContext`, `ResponseStream`, `TreeExtNode`, `JoinType`, `ChannelTypeEnum`, `RouteQuote`, `ListJobShapesRequest`, `CurriedFunction3`, `SystemInfo`, `SessionState`, `DCons`, `CheckConflictsParams`, `HttpClientService`, `USSTree`, `Texture`, `THREE.Intersection`, `ICachedResourceMetadata`, `EmptyIterable`, `RegistryService`, `DetachedSequenceId`, `VKeyedCollection`, `TitleTagService`, `MomentumOptimizer`, `ReplayDataMediator`, `BrowserType`, `FlatIndex`, `RTCCertificate`, `TimerHandler`, `BaseCoin`, `RegisterServerOptions`, `IndexRangeScanStep`, `RedundancyConfig`, `SecurityProviders`, `NSMutableArray`, `ModuleReference`, `ApiHttpService`, `CalendarState`, `IDataPerList`, `MessageImage`, `TextObject`, `Gui.VPanel`, `Maybe`, `MathjsBigNumber`, `GeoPoint`, `ParamAssignmentInfo`, `IMode`, `VdmProperty`, `ModuleOptionsWithValidateTrue`, `ToJsonProperties`, `Dynamics`, `AbbreviationInfo`, `RichTextComponents`, `NavigableSet`, `MatrixClient`, `JRPCRequest`, `IDanmaTrackInfo`, `ExpressionAstFunction`, `CheckpointWithHex`, `SetType`, `AsyncActionProcessingOptions`, `GLRenderer`, `ApiGatewayLambdaEvent`, `ContainerBinding`, `INodeStatus`, `MutationResult`, `ChatServerConnection`, `NavigationContainerRef`, `UniqueID`, `IDBDatabase`, `TsmOptions`, `d.OutputTargetHydrate`, `SwUpdate`, `AxiosHttpClient`, `float64`, `Community`, `VerificationContext`, `RX.Types.SyntheticEvent`, `SortValue`, `ItemShape`, `ProjectIdentifier`, `StartTagToken`, `HTMLCollection`, `CanaryMetricConfig`, `IUserAchievement`, `RecordRow`, `HoverFeedbackAction`, `CommonFile`, `TestHelpers`, `TargetedAnimation`, `MPRandGauss`, `Lease`, `OrderSide`, `VirtualEndpoint`, `IPaginationOptions`, `ClusterRole`, `WaitStrategy`, `SessionCache`, `StorageModuleAsyncOptions`, `EmbeddableFactoryDefinition`, `IncorrectFieldTypes`, `PageDTO`, `FFTProgram`, `IControlData`, `ChunkRange`, `SavedObjectsBulkUpdateOptions`, `JsonWebSignatureToken`, `ArgVal`, `ResponderModeTypes`, `GraphicsGrouping`, `ImageSegmenterOptions`, `TodosST`, `ListProtectedResourcesCommandInput`, `ChatContext`, `TypeEmitOptions`, 
`AstParsingResult`, `IKeyboardDefinitionDocument`, `SceneRenderContext`, `BackgroundStyle`, `MinifyOptions`, `RegistryContract`, `PerfTools`, `HTLC`, `DirtiableElement`, `RenderTag`, `StringWithEscapedBlocks`, `DeleteHsmCommandInput`, `TrophySubmission`, `apid.AddRuleOption`, `ListKeysCommandInput`, `requests.ListProjectsRequest`, `ColorSchemaOptionsProps`, `SignedCanonicalOrder`, `vscode.DecorationRenderOptions`, `DatatableArgs`, `AnalyzeResult`, `SubExpr`, `ConcreteSourceProvider`, `PortProvider`, `ConfirmDialogService`, `DistanceQueryInterface`, `NumberParams`, `Mdast.Root`, `ResponseReaction`, `ExportRecord`, `SearchExpression`, `HowToPay`, `instantiation.IConstructorSignature5`, `Templateable`, `DSpaceSerializer`, `Propagation`, `UIBrewStorage`, `NumberConstructor`, `SuspenseListRegistryItem`, `Node_Const`, `ScopedProps`, `EncryptedData`, `GfxRenderInst`, `DebugBreakpoint`, `InventoryInteractionService`, `HorizontalPlacement`, `SubscriptionEntry`, `CorporationCard`, `Fraction`, `DtlsClient`, `ElevationProvider`, `ImageStretchType`, `MemoryDebe`, `FunctionConstructor`, `HierarchyFacts`, `ElTableStoreStates`, `server.DataLimit`, `GlobalReplicationConfig`, `InnerClientState`, `ISuggestion`, `y`, `IKibanaMigrator`, `TransactionView`, `JobCommand`, `StepFunction`, `ListFirewallPoliciesCommandInput`, `TimelineItem`, `WebGLExtensionEnum`, `TestFn`, `DejaSelectComponent`, `TestDeployRetrieve`, `LookupSorter`, `ChannelList`, `GauzyCloudService`, `ExpressionRegexBuilder`, `IFluidDataStoreContext`, `SelectorMap`, `SetDefaultPolicyVersionCommandInput`, `ReleaseChannel`, `ApiClientRequest`, `IWebhookData`, `MatMenuItem`, `GenerativeToken`, `Submitter`, `WorkspaceSummary`, `EC2Client`, `Kernel.IKernelConnection`, `FocusKeyManager`, `RootHex`, `React.HTMLProps`, `DebugProtocol.StepInArguments`, `ForwardingSchema`, `SafeSelector`, `SceneGraphNode`, `ScratchOrg`, `MongoClientOptions`, `https.ServerOptions`, `AbiEntry`, `ValueMetadataAny`, `AuthorizationNotFoundFault`, `FilterDescriptor`, `SignedMultiSigTokenTransferOptions`, `HandlerParamOptions`, `UserTokenAccountMap`, `Compression`, `ModOutput`, `LogsConfiguration`, `pe`, `BuildPipelineParams`, `TypeAST`, `ResourceKeyList`, `webpack.loader.LoaderContext`, `ZimCreator`, `NotFoundError`, `requests.ListImagesRequest`, `EnergyAmounts`, `search.SearchState`, `VisualizationsPlugin`, `CharacterClassElement`, `CommandBase`, `DeviceSize`, `ColorFilter`, `UpdateFolderCommandInput`, `ts.JSDocTag`, `ISortOption`, `EventToAsyncUnHandler`, `SuggestionWithDetails`, `IClaimData`, `ShValue`, `DisableOrganizationAdminAccountCommandInput`, `InlineResolveOptions`, `MetadataPackageVersion`, `WeakEvent`, `ParsedSchema`, `URLSearchParamsInit`, `Trap`, `OctoServerConnectionDetails`, `VisSavedObject`, `ArticleType`, `SendMessageData`, `ServiceDefinitionPaths`, `PackageListItem`, `DomainDeliverabilityTrackingOption`, `Progress`, `ListInvitationsCommandInput`, `WhereBuilder`, `CreateAccountsValidationResult`, `SpeechSynthesisEvent`, `TestLogger`, `IsolatedAction`, `ColumnDifference`, `PluginSettings`, `TreeviewNode`, `jest.MatcherUtils`, `CSSOutput`, `Regions`, `PullRequestState`, `XMLHttpRequestResponseType`, `PointS`, `TransferArgs`, `ColumnSubscription`, `CheckPrivilegesOptions`, `CombinedDataTransformer`, `firestore.DocumentSnapshot`, `StackParams`, `Json.Token`, `IGetUserInvitationOptions`, `MockCSSRule`, `ProseNodeMap`, `t.JSXElement`, `ExpansionModule`, `GlimmerComponent`, `EditHistoryCommit`, `UnsupportedSyntax`, `StandardMaterial`, `HttpOptions`, 
`DescribeOrganizationConfigurationCommandInput`, `GrpcEventEmitter`, `AuthAction`, `EventObject`, `PersistentCharacter`, `CreateSnapshotCommandInput`, `BigLRUMap`, `RootState`, `ExplorerState`, `TimeTicksInfoObject`, `AwaitNode`, `___JSE_XLSX___Node`, `SanityTestData`, `OnScroll`, `MenuService`, `ContentActionRef`, `BaseSettings`, `LazyLight`, `SchematicContext`, `EqualityComparer`, `CreateOptions`, `PluginInsertActionPayload`, `PaymentDataRequest`, `IDiff`, `AnimationTransitionMetadata`, `Actors.Actor`, `AsyncCommandWithOps`, `PipelineVersion`, `ZoneNode`, `INativeTagMap`, `ISQLScriptSegment`, `InsertBuilder`, `DaffStateError`, `BoundsOffsets`, `TestElementDrivesElement`, `FlexStyleProps`, `ACCategory`, `Oas3Rule`, `ContextAwareLogger`, `RemoteStoreRoom`, `DescribeAddonCommandInput`, `Sidekick`, `TToken`, `CollectionViewLayout`, `SequentialArgs`, `ContextTransformer`, `TestChannelArgs`, `UITraitCollection`, `ProxyConfig`, `TFLiteDataType`, `Stereotype`, `ObsoleteOptions`, `CreateDomainNameCommandInput`, `GoConditionalMove`, `vscode.Hover`, `CookieSettingsProps`, `PDFState`, `ExecutionPureTransitions`, `OrderStruct`, `TSQueryOptions`, `QueryResultRowTypeSummary`, `NodeURL.URL`, `SearchState`, `DataTableService`, `ShardingInstance`, `AaiMessage`, `DetailedStackParameter`, `TransactionEvent`, `NuxtConfig`, `PieVisualizationState`, `ServiceCollection`, `Statements`, `PluginConstructor`, `TSigner`, `CreateUserService`, `GoogleUser`, `PadchatContactPayload`, `DiscoveredClass`, `immutable.Set`, `VariablePart`, `CallHandler`, `ReducerWithInitialState`, `VideoStreamRenderer`, `Linters`, `ContentProvider`, `JStretch`, `IHookCallbackContext`, `RebootBrokerCommandInput`, `TitleTagData`, `GroupAction`, `AnimatorDuration`, `ProgramOptionsList`, `ParserFnWithCtx`, `AnimationProps`, `EasingFunction`, `UsageInfoHoverInfo`, `TypingIndicatorStylesProps`, `KarnaughMapProps`, `ConnectionHandler`, `EvictReasonType`, `LayerService`, `InputBit`, `RightHandSideEntry`, `ScopedStateManager`, `SVGRect`, `CesiumProperties`, `ListFormat`, `ClientGoalState`, `Keyboard`, `SimpleTest`, `NumberInfo`, `PlatformUtilsService`, `W3`, `HandlerOptions`, `SupportCode`, `SerializedEntityNameAsExpression`, `AST.Root`, `ObserverNameHolder`, `TopologyService`, `DraggedWidgetManagerProps`, `Psbt`, `InspectorOptions`, `TelemetryServiceConstructor`, `MetadataMap`, `InferTypeNode`, `IDocumentMergeConflict`, `ProjectViewModel`, `Basket`, `ValueGetter`, `VSCServerManagerBase`, `TFields`, `ImportDefaultInterface`, `IntersectionType`, `GaiaHubConfig`, `SQLResultSet`, `NavigationDirection`, `ListServicesRequest`, `HdBitcoinPayments`, `MetricAggType`, `GlossyMaterial`, `ComponentTypeOrTemplateRef`, `ComponentHandler`, `BaselineEvaluation`, `TRPCLink`, `IssueWithStatus`, `VirtualContestItem`, `XmlEmptyMapsCommandInput`, `WordcloudSeries.WordcloudFieldObject`, `IGameData`, `Commutator`, `CheckedObserver`, `Team`, `Provide`, `PuppeteerScreenshotOptions`, `OperatingSystem.Windows`, `StyleHelpers.QuoteInput`, `UserBuilder`, `NavigationTree`, `ChartConfiguration`, `DrawIOUMLDiagram`, `BlockchainGatewayExplorerProvider`, `d.ComponentConstructorWatchers`, `UserStatsState`, `TransactionFunction`, `ReactDataGridFilter`, `OverlayConfig`, `XPCOM.nsXPCComponents_Interfaces`, `AttributionsToResources`, `MessageType`, `IMongoResource`, `StoreConfig`, `IListFunctionOptions`, `NodeParameterValue`, `PersonState`, `project.Project`, `RouteHandler`, `IDs`, `OrderByNode`, `dKy_tevstr_c`, `MentionInputorElement`, `IChannelManager`, `compare`, `SettingDictionary`, 
`IColumnIndices`, `CSharpFieldType`, `Random`, `GetMetaDataFunction`, `RosettaOperation`, `TinderLike.Props`, `ObjectASTNode`, `ApiViewerTab`, `GraphQLSchemaNormalizedConfig`, `PreferenceInspection`, `BooleanValidator`, `ASStatement`, `AlertWrapperProps`, `PureTransitions`, `FooterProps`, `SpectatorHost`, `AtomTweeningNumberElement`, `IdentityProviderConfig`, `IonContent`, `ClipPrimitive`, `requests.ListManagedInstanceGroupsRequest`, `PlaneType`, `PAT0_TexData`, `FileCache`, `EngineMiddleware`, `FocusOrigin`, `void`, `BlokContainer`, `InternalPropertyObserver`, `LanguageOptions`, `Recommendation`, `PartitionFilter`, `CreateEmailIdentityCommandInput`, `InsightObject`, `Pong`, `MapMarker`, `GlobalState`, `SubPredArg`, `QueryFn`, `GraphQLFieldResolver`, `VisualizeAppProps`, `LoadMetricInformation`, `NavigatorRoute`, `NotebookEvents`, `SubnetAlreadyInUse`, `DiscordMockContext`, `DropHandler`, `RuleOptions`, `SolutionStackProps`, `FlashArguments`, `MinAdjacencyListArray`, `_NotificationConfig`, `IDocumentMessage`, `EnergyMap`, `ILanguageSyntax`, `GravityType`, `ExtraFieldDetail`, `ContainerAdapterClient`, `SMA`, `MDCProgressView`, `StubXhr`, `Var`, `PlaintextMessage`, `HandleResult`, `PrepareEnvVariablesProps`, `GCPAuthOptions`, `CodeGeneratorContext`, `ConnDataType`, `UpptimeConfig`, `CallExpression`, `ControllerMetadata`, `CursorModelConfig`, `X509CertificateSupplier`, `ComposeSubscriber`, `ValidationRule`, `DocumentSnapshotCallback`, `MyAppProps`, `CloseReason`, `promise.Promise`, `AppNotificationManager`, `ToneBufferSource`, `TransitionOptions`, `ProgressAtDayModel`, `LevelActionTypes`, `StateSelectors`, `IDeployContext`, `AlignSelf`, `EntityDeserializer`, `PromiseRejectionEvent`, `QueryProviderAuditorRequest`, `NetworkData`, `Assets`, `QualifiedId`, `BoxProps`, `FrameBuffer`, `Shortcut`, `AngleSweep`, `ResourceTypes`, `SearchQueryBuilder`, `Styler`, `FooService`, `NodeWithId`, `TxType`, `NetworkProvider`, `V1APIService`, `Fn4`, `ModalsState`, `Events.pointerleave`, `WorkRequestResource`, `TransitionInstruction`, `PackageManagerType`, `DeleteModelCommandInput`, `TimelineHeaderWrapper`, `EcommerceItem`, `FindSelector`, `Necktie`, `NameOrCtorDef`, `ICoverageFile`, `IRemindersGetByContactState`, `RatePretty`, `React.ClipboardEvent`, `CategoryRendererItem`, `GitStatus`, `ExtensionItem`, `AureliaProgram`, `TestKeyring`, `SupportedService`, `TypeSignature`, `GetInstanceCommandInput`, `CallbackFunc`, `ChartSeries`, `WebrtcProvider`, `ERC721TokenDetailed`, `TradingPair`, `CentralSceneCCNotification`, `InstanceBlockDeviceMapping`, `RemoveEventListener`, `DataArray`, `JoinCandidateBuilder`, `StablePlace`, `IReportEmbedConfiguration`, `Proppy`, `Dirigibles`, `ByteSize`, `DebugProtocol.VariablesArguments`, `IBirthCompositionBody`, `ISharedFunction`, `PreventCheck`, `ConnectionInvitationMessage`, `ConfigService`, `ImmutableListing`, `AnyCoinCode`, `ProppyFactory`, `EthereumProvider`, `StackFrame`, `ConfigureOptions`, `BlockPath`, `AClass`, `cheerio.Root`, `MatOpN`, `GeneralState`, `TElementNode`, `CollectorOptions`, `WebGLActiveInfo`, `MakefileConfiguration`, `PickDeepObj`, `vscode.ConfigurationScope`, `InterfaceSymbol`, `GetDeliverabilityTestReportCommandInput`, `RlpSerializable`, `VisualizePluginStartDependencies`, `requests.CreateCertificateRequest`, `GfxProgram`, `IMilestone`, `TransferBatch`, `KeywordType`, `AnyRecord`, `TrainingZone`, `SidenavContextType`, `DataStore`, `GetGroupRequest`, `EarlyReturnType`, `redis.RedisClient`, `mb.IRecording`, `PropertyCategoryLabelFilterer`, `Duplex`, 
`LogStatement`, `DictionaryPlugin`, `RemoteFile`, `ConnectionWorkflow`, `PiScopeDef`, `DataSourceType`, `ChartJSService`, `SerializedPlayer`, `TransformedPoint`, `ImageHandler`, `XhrCompleteContext`, `MIMEType`, `LocationLink`, `WsConnection`, `Separate`, `CommandHandler`, `IGatewayRoom`, `TabStripItem`, `ProductTypeService`, `TreeElement`, `PianoNote`, `ISiteScript`, `IIterator`, `Agenda`, `IConnected`, `StkTruToken`, `GreetingWithErrorsCommandInput`, `MyOtherObject`, `FcUuidAuth`, `DropIndexNode`, `CandidateCriterionsRating`, `ts.InterfaceType`, `GeometryStreamProps`, `sdk.SpeechSynthesizer`, `IRectangle`, `M`, `DataContext`, `HTMLOptGroupElement`, `ComputedPropertyName`, `IConnector`, `ApplyChangeSetOptions`, `Auth0UserProfile`, `EventState`, `Box3Like`, `SinonSpy`, `TokenModel`, `ExpressionRendererEvent`, `FieldNameList`, `TestControllerPoint`, `RepositoryFile`, `MethodNode`, `TextMatchOptions`, `IOrderCreationArgs`, `AppWithCounterAction`, `AST.Regex`, `Strapi`, `FourSlashFile`, `ExploreState`, `BIP85Child`, `ListSchemaVersionsCommandInput`, `ArtifactSizes`, `MeshLambertMaterial`, `BigNum`, `PathResolverResult`, `Types.RouteCallback`, `LoginInfo`, `TransferValidatorStakeV1`, `Select`, `ColumnSeriesOptions`, `ChannelWrapper`, `ParseEvent`, `QueryBarTopRowProps`, `JSONSchemaType`, `KeyPairKeyObjectResult`, `NzI18nService`, `IDBPDatabase`, `MessageWriter`, `WsDialogService`, `RouteRecord`, `LocalStorageKeys`, `ElectronService`, `ListInputsCommandInput`, `FieldOptions`, `IndexedAccessType`, `StyleInfo`, `ExpRes`, `MessageResolvable`, `Curry`, `IStore`, `Prop`, `ProblemFileType`, `Equals`, `requests.ListWafBlockedRequestsRequest`, `Services.UIHelper`, `TypedBinOp`, `ClientCapabilities`, `IterableReadable`, `HexString`, `ApiProps`, `SExpressionRepl`, `EdgeCollider`, `Picture`, `Reject`, `ParentFiber`, `ENR`, `i64`, `IMapSettings`, `InputThemeConfig`, `AssemblyOption`, `Fn`, `MentionSuggestionsProps`, `ReactWrapper`, `XorShift`, `Rule.RuleContext`, `DalBoard`, `SendManyOptions`, `PageState`, `GitRemote`, `RawDraftContentState`, `InjectedMetadataSetup`, `egret.MovieClip`, `SbbIconOptions`, `ServiceUnavailableException`, `NodeWorkerMain`, `SocialError`, `CheckoutPaymentPage`, `ParentNode`, `Resume`, `FunctionFlags`, `Send`, `TextureProvider`, `RequesterBlockMap`, `GfxImplP_GL`, `LocalAccount`, `IMouseZone`, `HandlerMap`, `AssetKey`, `TemplateScope`, `Booking`, `ITdDataTableColumn`, `TestComponentProps`, `XAxisProps`, `OrganizationRecurringExpenseService`, `Parameter`, `SettingsStateType`, `STStyle`, `Primitives.Value`, `DBDriverResource`, `SchemaObjCxt`, `ICompiledFunctionCall`, `IPluginContext`, `MOCK_TYPE`, `RegExpCompat`, `DaffMagentoCartTransformer`, `MethodSignature`, `RouteDefinitionParams`, `SearchBoxProps`, `IHTTPRequest`, `IAction`, `IResultSetColumnKey`, `MapRewardNode`, `ExportedNamePath`, `IRemoteUser`, `LanguageDetectorAsyncModule`, `SKShadowItem`, `ServiceEntitlementRegistrationStatus`, `InitWindowProps`, `BinaryOpNode`, `UsersRepository`, `CompletrSettings`, `AsyncResultCallback`, `SfdxCliActionResultDetail`, `UiStateStorageStub`, `SortablePolygon`, `InitChunk`, `TwistyPlayer`, `BaseKey`, `SChildElement`, `GoogleMap`, `SubjectsBounds`, `IMergeTreeDeltaOpArgs`, `Variation`, `HttpMiddlewareEffect`, `Validator`, `IFormatterParserFn`, `ProcessOptions`, `SavedEncounter`, `MapViewFeature`, `TokenObject`, `InteractionService`, `Apollo`, `kifp_element`, `AggregationMap`, `ObjectDescriptor`, `SimpleObject`, `NotificationItem`, `RegistryKey`, `DatePrecision`, `EffectSystem`, 
`CreateBranchCommandInput`, `HdErc20Payments`, `IMusicInfo`, `ServerlessRecord`, `SignedCredential`, `Aabb3`, `ProjectStatus`, `PointerInfo`, `ProviderSettings`, `PingMessage`, `HTMLRewriter`, `SimpleRNNCellLayerArgs`, `GDQBreakBidManyOptionElement`, `RecordsGraph`, `TypeInferences`, `protos.google.iam.v1.ISetIamPolicyRequest`, `SearchPattern`, `RendererProps`, `ApiService`, `AuthContextData`, `DSVParsedArray`, `DescribeDBClusterSnapshotsCommandInput`, `HTMLScTooltipRowElement`, `TestConfigData`, `DidConfig`, `ContractKit`, `IModLoaderAPI`, `ILookUpArray`, `ParsedTypeDetailed`, `TxSummary`, `NewPackagePolicy`, `ContentLayoutProps`, `AngularExternalTemplate`, `IObjectInspector`, `ExpressionExecOptions`, `DoubleLinkKVStore`, `ts.ESMap`, `StyleResource`, `GroupBySpec`, `TargetElement`, `ActionSheetOptions`, `IterableProtocol`, `TestBadgeComponent`, `NpmPackageManager`, `V1Container`, `TypedNode`, `HasTaskState`, `UserConfig`, `CollectBBox`, `Decomposers`, `ItemSocket`, `IndyProof`, `ChannelPickerItemState`, `ProviderLibrary`, `MDCTabBarAdapter`, `Plugin_2`, `MessageTypeMapEntry`, `ModuleRpcServer.ServiceHandlerFor`, `ContainerRegistryEvent`, `TxGeneratingFunctionOptions`, `ConfigAccount`, `DeleteEventSubscriptionCommandInput`, `Inherits`, `UnicodeRangeTable`, `BasicGraphPattern`, `ResourceComputationType`, `SubmissionStatus`, `PureEffect`, `XPCOM.nsIComponentRegistrar`, `WebPhoneUserAgent`, `KeyframeNodeOwner`, `TriggerEvent`, `ValidateDeviceOwnershipQuery`, `ScrollIntoViewOptions`, `requests.ListPublishersRequest`, `SchemaValidator`, `UnresolvedLogs`, `HTMLWalkState`, `MinMaxSurroundAttestation`, `NodeJS.ReadableStream`, `DelonLocaleService`, `SubjectService`, `TSeed`, `NodeSelector`, `NewOrganizationDTO`, `RuleFixer`, `SyncServer`, `DAL.DEVICE_ID_BUTTON_RESET`, `Accessibility.PointComposition`, `IActorRdfDereferenceOutput`, `BroadcastTx`, `InterventionTipsStatuses.StatusIds`, `ArrayServiceTreeToArrOptions`, `Test2`, `QBFilterQuery`, `MeshLODLevel`, `BaseApplication`, `FinalEventData`, `AppEntry`, `FetchResolveOptions`, `ItemModel`, `ThumbnailModel`, `PIXI.Container`, `StreamingFeeState`, `ConfigurableStartEnd`, `MomentInterval`, `PutFeedbackCommandInput`, `NonReactive`, `TutorialDirectoryHeaderLinkComponent`, `GitHubRef`, `Bootstrap`, `EventAggregator`, `Shared`, `Config.DefaultOptions`, `TPDISearchParams`, `XmlMetadata`, `d.HostRuleHeader`, `DeleteIdentityProviderCommandInput`, `CountItem`, `ClipboardWatcher`, `ClientStateType`, `CreateAccountStatus`, `GunValue`, `MicrophoneConfig`, `Facsimile`, `BeatmapDifficulty`, `UpdateAuthorizerCommandInput`, `SourceFuncArgs`, `NgEssentialsOptions`, `ParquetCodec`, `fused.Activation`, `TemplateGroup`, `SimpleCharacter`, `CeramicApi`, `SelectorQuery`, `UseInfiniteQueryResult`, `BSTProxy`, `Pipe`, `IRunConfig`, `BaseInterface`, `ILeg`, `InterpolationConfig`, `ConfigEntity`, `CurrentProfile`, `Callout`, `GenesisCommit`, `ISkin`, `UserClaims`, `ISizes`, `Tape`, `MeterScale`, `CommandError`, `os.NetworkInterfaceInfo`, `TradeSearchHttpQuery`, `JwtPayload`, `TaggedTemplateLiteralInvocation`, `PageChangeEvent`, `VisiteRepartitionType`, `Genesis`, `T.NodeRef`, `d.HydrateResults`, `PDFString`, `region`, `StoredEvent`, `CSharpInterface`, `HTMLOptions`, `types.Position`, `GX.WrapMode`, `IPartyMember`, `IOpenRepositoryFromURLAction`, `IVariantCreateInput`, `VirtualFile`, `WaveformHD`, `IntervalNode`, `FSA`, `BlockchainPackageExplorerProvider`, `DeleteManyInput`, `Decl`, `PointGraphicsOptions`, `SVGAElement`, `ICompiledRules`, `SymOpts`, `AppsState`, 
`SendEmailOptions`, `HookHandlerDoneFunction`, `AbstractModel`, `ControllerType`, `runtime.HTTPHeaders`, `GithubAuthProvider`, `WalletContractService`, `SongState`, `estypes.SearchRequest`, `Vol`, `InMemoryUser`, `DescribePackageVersionCommandInput`, `MarkdownFile`, `Models.Side`, `Subject`, `Y`, `DomainEventMapping`, `SearchResultsAlbum`, `SecurityManager2`, `SqrlEntity`, `LobbyOverlayProps`, `NanoID`, `RotationType`, `ShareStoreMap`, `requests.ListInstanceConsoleConnectionsRequest`, `CapsuleColliderShape`, `UpdateSpellUsableEvent`, `TSESTree.MemberExpression`, `ClassLike`, `HeroService`, `ICollections`, `lf.Database`, `PublicSymbolMap`, `PrimaryTableCol`, `FormatOptions`, `MultiMult`, `ASModule`, `ElementHandle`, `TargetDetectorRecipeDetectorRule`, `RenderingContext2D`, `JAddOn`, `Thickness`, `Prando`, `MockContract`, `LocalOptions`, `TableType`, `GraphGroup`, `Geometry`, `android.app.Activity`, `ESLintClass`, `CSS`, `CompositeBrick`, `ReportFunnel`, `BrowserContextOptions`, `SanityTestNode`, `JSONPath`, `CompressedImage`, `AppealChallengeData`, `ListAppInstancesCommandInput`, `RstatementContext`, `ModelEvaluateArgs`, `ID3v2MajorVersion`, `ISolutionEntry`, `ServerModel`, `Linter`, `IntVoteInterfaceWrapper`, `ApplicationState`, `ACLType`, `T15`, `KeyShare`, `CoapServer`, `UsePaginatedQueryState`, `AtomDataHandler`, `MetricIndicator`, `ApplicationOptions`, `OptionNameMap`, `vscode.QuickPickOptions`, `IsSpeakingChangedListener`, `ITile`, `DType`, `TSPosition`, `IProjectCommand`, `IStszAtom`, `StructureLab`, `RowViewModel`, `View`, `MessageSerializer`, `FindingCriteria`, `CommonContext`, `Models.DiagnosticsSettings`, `Comparison`, `ScmResourceGroup`, `SubContext`, `ts2json.DocEntry`, `HsEndpoint`, `GlyphInfo`, `GetInvitationsCountCommandInput`, `Vorgangsposition`, `JRes`, `ConnectorType`, `LayoutType`, `Handlebars.HelperOptions`, `TimeFilterServiceDependencies`, `BaseNavTree`, `VimMode`, `DebugAction`, `StreamSelection`, `SpacesManager`, `HttpChannelWrapper`, `ExpressionFunctionClog`, `EmaSubscription`, `HttpLink`, `requests.ListJobRunsRequest`, `TileCoords3D`, `Ulong_numberContext`, `MagicRPCError`, `Transformable`, `FactoryProvider`, `PredicateNode`, `IButtonClickEvent`, `PushRPC`, `Augur`, `IPlatform`, `CssFile`, `KeyPairOptions`, `V1ClusterRole`, `PackageJsonInfo`, `KeyboardKeyWrapper`, `Build`, `requests.ListSecretsRequest`, `DataPublicPlugin`, `SignedTransaction`, `IdOrNull`, `FlattenedXmlMapWithXmlNamespaceCommandInput`, `DescribeAppCommandInput`, `SVGRectElement`, `requests.ListLocalPeeringGatewaysRequest`, `SubMiddlewareApi`, `OurOptions`, `FunctionDesc`, `GPUSampler`, `ReadonlyVec`, `MonitorState`, `ILineInfo`, `GetPolicyVersionCommandInput`, `LogicAppInfo`, `TransmartRelationConstraint`, `requests.ListNotebookSessionsRequest`, `DataModels.Correlations.ProcessInstance`, `AjaxAppenderConfiguration`, `SshSession`, `GitQuickPickItem`, `DataViewColumn`, `LineSelection`, `ErrorMessageTracker`, `DesktopCapturerSource`, `StoreActions`, `MockMessageClump`, `Constraint`, `AggregationCursor`, `ShellWindow`, `LITestService`, `UpdateEntrypoint`, `PDFPage`, `PubScript`, `DbAbstractionLayer`, `CollectionTypes`, `BaseAsset`, `MenuTree`, `requests.ListNodePoolsRequest`, `ITestRunnerOptions`, `MixinTable`, `LocaleData`, `InstancedBufferGeometry`, `GuardFunction`, `Listr`, `ImageEdits`, `ChunkList`, `FlexDirection`, `RollupBlock`, `requests.ListSteeringPoliciesRequest`, `GeneralName`, `TexGen`, `angu.Context`, `ClassDescriptor`, `AnyIterable`, `FlexboxLayout`, `IJSONSegment`, `GameDataStateRecord`, 
`ActionReducer`, `TNATxn`, `ComponentWithUse`, `MonitoringStats`, `d.BuildResultsComponentGraph`, `PropertyExt`, `DeprecationsFactory`, `Collector`, `TreemapNode`, `DataExtremesObject`, `PointOctant`, `SimpleRNNLayerArgs`, `AsyncPriorityQueue`, `TScope`, `RequestType`, `DaffOrderFactory`, `IScreenshot`, `AppMenu`, `ISourceOptions`, `React.Ref`, `NSNotification`, `PartyJoinRequest`, `BundleManager`, `CompilerError`, `JapaneseDate`, `JPAResourceData`, `enet.NetData`, `ReplyRequest`, `BinaryTargetsEnvValue`, `STDeclaration`, `MatchingLogic`, `SegmentGroup`, `LengthType`, `ProcessEnv`, `social.UserData`, `PrismaPromise`, `TLE.TleParseResult`, `ProjectOptions`, `TickerFuncItem`, `MediaSlot`, `_.Dictionary`, `ListIdentitiesCommandInput`, `VKFParamMap`, `HookTypes`, `StyleSheetList`, `EditorPackage`, `ClrDatagridStateInterface`, `RoutableTileWay`, `DockerContainerProps`, `TypeFeatures`, `SendCommandResult`, `DashboardPanelState`, `PopupModel`, `LwaServiceClient`, `GenericDefault`, `EndResult`, `DynamoDB.QueryInput`, `EntityActionOptions`, `Bundle`, `CreateChildSummarizerNodeParam`, `apiKeysObject`, `UILayoutGuide`, `GetMapParams`, `KeyboardShallowWrapper`, `TypeReconstituter`, `AppContextData`, `ConfigurationData`, `ListChannelsCommandOutput`, `ExpressionsServiceStart`, `IAccount`, `DiffLine`, `FieldAccessInfo`, `AudioVideoControllerState`, `vscode.DebugSession`, `d.OutputTarget`, `DatabaseSet`, `JSDocTagInfo`, `Discipline`, `Pool`, `TempDir`, `MpEvent`, `Tabs`, `SegmentedBar`, `SocketPoolItem`, `FormDefinition`, `ThermostatSetpointType`, `SampleView`, `SelectedScope`, `Rx.Observable`, `RtpHeader`, `kKeyCode`, `WebGPUTensor`, `ToastsManager`, `InitData`, `CoreConnector`, `DescribeDatasetCommandInput`, `Privacy`, `ColorData`, `SchemaConstructor`, `GroupProperties`, `Resolution`, `SignalingClientObserver`, `QuickCommand`, `Fauna.Expr`, `SimulatorState`, `stream.Writable`, `AureliaProjects`, `RuleStateData`, `HnCache`, `Events.postdraw`, `DocCollection`, `RouteDeps`, `IterationUI`, `CarouselItem`, `ChannelService`, `UpdateParticipantRequest`, `WType`, `RemoteAction`, `ArrayOption`, `MainModule`, `CipherCCM`, `TweenInput`, `StorageOptions`, `CreateUserCommandOutput`, `TemplateUnparser`, `SpriteSpin.Data`, `StateBlock`, `google.maps.MarkerOptions`, `ViewPortManager`, `BifrostProtocol`, `GraphStats`, `Vec`, `IRequestOptions`, `StylableModuleSchema`, `ClrFlowBarStep`, `CSharpMethod`, `ColumnsProps`, `BaseUnit`, `CollectionValue`, `RuleChild`, `VisibleTextLocator`, `L1L2Args`, `IInventoryItem`, `RequestType0`, `ProviderData`, `InviteActions`, `AbiItem`, `MatrixProvider`, `GetChildNodes`, `AiPrivateEndpointSummary`, `PluginKey`, `TextComponent`, `HTMLLineElement`, `HookFunction`, `SuiDropdownMenuItem`, `ConvLSTM2DCellArgs`, `StreamAddOutgoingParam`, `FuseConfigService`, `CircuitGroupCircuit`, `ZipsonWriter`, `Types.PresetFnArgs`, `PathTargetLink`, `XTableColumn`, `HtmlTagObject`, `CSVInput`, `CoreDeploy`, `NextApiReq`, `BoundAction`, `Folder`, `dia.Cell`, `MigrationMap`, `OutputMessage`, `AgentOptions`, `Koa`, `IntelliCenterConfigRequest`, `Stmt`, `ValueObject`, `CivilContextValue`, `PriceData`, `TestMochaAdapter`, `Touch`, `Highcharts.Options`, `ContractCall`, `CollectionViewer`, `B2`, `ParameterName`, `ILyric`, `NonExecutableStepCall`, `CreateAppRequest`, `AutoFeeLevels`, `SearchEnhancements`, `AuguryEvent`, `DaffGetCategoryResponse`, `ts.ImportClause`, `LContext`, `ITheme`, `RiskLevel`, `AddressBookConfig`, `KeyframeIconType`, `ResourceManager`, `TEvents`, `StructureContainer`, `FilterObject`, 
`DataAssetSummary`, `UpdateNetworkProfileCommandInput`, `PrivateEndpointDetails`, `DynamicTextStyle`, `CircleShape`, `TaskDto`, `NestedHooks`, `Fu`, `DescribeSnapshotsCommandInput`, `LogEntry`, `CustomCode`, `RefactorAction`, `CLIElement`, `TabStateReturn`, `DraggableData`, `Blok`, `EnumDeclaration`, `AnnotationsMap`, `VMenuData`, `InstanceDetails`, `BaseRecordConstructor`, `CompletionPrefix`, `IssuesCreateCommentParams`, `d.ComponentCompilerProperty`, `ShortcutsTypes`, `DebuggingMode`, `TableSchema`, `SeederCollection`, `PartyMatchmakerAdd`, `LogicalQueryPlan`, `MapViewApp`, `ManualClock`, `requests.ListBootVolumesRequest`, `BlocksModel`, `TaskObserversUnknown`, `TransformCssToEsmOutput`, `TEmbeddableInput`, `OperationType`, `OhbugClient`, `Models.QuotingParameters`, `BarProps`, `Feature`, `TreeSelectItem`, `TokenPayload`, `INodeInfo`, `SketchLayer`, `VChild`, `ViewportScrollPosition`, `requests.ListStreamPoolsRequest`, `IAward`, `TaskTypes`, `IterableX`, `DeleteEmailTemplateCommandInput`, `ResolveInfo`, `LanguageData`, `MutableVector2`, `StaticComponent`, `ISystemActions`, `BlobService`, `YallOptions`, `DictionaryService`, `FunctionCall`, `PerfectScrollbarConfigInterface`, `WebhookActionConnector`, `BottomSheetNavigationState`, `pxtc.ApisInfo`, `TransactionAuthFieldContents`, `ISampler`, `PackageDetails`, `BaseEvent`, `ToolbarUsage`, `LayoutBase`, `BaseAtom`, `ex.Input.PointerEvent`, `DocumentExtra`, `MediaTrackConstraints`, `TableColumn`, `TimePrecision`, `TemplatePieces`, `QueryDocumentSnapshot`, `MsgDepositDeployment`, `CentersService`, `IdentityProviderSelectionPage`, `ICellEditorParams`, `DomSource`, `SourceMaps`, `ArgValue`, `ContextInterface`, `TestClient`, `PlayerListPlayer`, `RotationallySymmetricShape`, `StoredConfiguration`, `CmsConfig`, `requests.DeleteWorkRequestRequest`, `ILoggerService`, `MDCTabAdapter`, `HapiServer`, `RenameParams`, `ISvgMapIconConsumerProps`, `IFormProps`, `StateVisNode`, `ReuseContextCloseEvent`, `AnimationFactory`, `MatHint`, `ResManager`, `ParseExpressionTextResults`, `CharCategoryMap`, `ActiveLabel`, `VisualizationsStartDeps`, `GunGraph`, `VariableValue`, `ExpectedDiagnostics`, `GX.DiffuseFunction`, `requests.ListVmClusterUpdatesRequest`, `NamedType`, `gPartial`, `SelectionConstructorArgs`, `AnimationKeyframeHermite`, `ClassDefinition`, `IObjectType`, `TimeZone`, `DeleteFlowCommandInput`, `MapDispatchToPropsFunction`, `Outbound`, `OptionMessage`, `React.ReactNode`, `HsdsCollection`, `android.net.Uri`, `DoStatement`, `NoticeItem`, `DisplayObjectWithCulling`, `CreateFolderCommandInput`, `GeneratedKey`, `WcCustomAction`, `ASTNode`, `usize`, `SelectOptionValue`, `VisOptionsProps`, `HeroSearchService`, `InteractiveProps`, `GtConfigField`, `DescribeValidDBInstanceModificationsCommandInput`, `TextRange`, `ProtractorBrowser`, `ArrayContent`, `E2EPage`, `RendererMock`, `QuestionMatrixDynamicModel`, `DeployProviders`, `btTransform`, `RoomMember`, `RouteTable`, `RuleData`, `d3.HierarchyPointNode`, `FD_Entity`, `ScaleByFactor`, `Profile`, `RadioButton`, `ActionTypeExecutorOptions`, `PlanetApplicationRefFaker`, `IAtDirectiveData`, `NodeStat`, `requests.ListDrgRouteDistributionsRequest`, `SeriesPoint`, `MessageDocument`, `WalkState`, `AssignmentDeclarationKind`, `AccountIdRequestMessage`, `ButtonHTMLProps`, `OrganizationTeamEmployee`, `CharWhere`, `StandardTokenMock`, `PageService`, `AttributifyOptions`, `LoginUser`, `TComAndDir`, `BrowserSession`, `AssignAction`, `LayoutStateModel`, `CommandCreatorResult`, `SMap`, `ActivityInfoModel`, `IKernel`, `SwaggerOptions`, 
`MissionSetupObjectSpawn`, `StopMeetingTranscriptionCommandInput`, `serviceRequests.GetWorkRequestRequest`, `NumberListRange`, `android.graphics.drawable.BitmapDrawable`, `TSTypeReference`, `TBSelection`, `d.TranspileModuleResults`, `HTMLHeadingElement`, `IncomingHttpResponse`, `BEMData`, `RadioItem`, `FlipperLib`, `ApiReturn`, `PDFObjectStream`, `INewProps`, `EndpointConfiguration`, `BottomNavigation`, `CancellationReason`, `ErrorMessage`, `Leg`, `PayloadType`, `Happening`, `MetaStaticLoader`, `OpenSearchUtilsPlugin`, `NuxtContext`, `GX.IndTexFormat`, `TActions`, `AnalysisCache`, `Helpers`, `ThExpr`, `ConnectFailedListener`, `GettersFor`, `OwnedUpgradeabilityProxyInstance`, `Collectable`, `TextField`, `CreateBundleDTO`, `FontAwesomeIconStandalone`, `FireLoopRef`, `CameraComponent`, `DAL.DEVICE_ID_MULTIBUTTON_ATTACH`, `FontMetricsObject`, `PromiseExecutor`, `LocalForage`, `WebController`, `Coll`, `ContainerInfo`, `HostedZone`, `Entitlement`, `IMetricContext`, `semver.SemVer`, `ClipEdge`, `ListConfigurationRevisionsCommandInput`, `SearchInWorkspaceResultLineNode`, `GetIamPolicyRequest`, `InternalParser`, `SigningWallet`, `ICountry`, `RawVueFileName`, `MenuPositionX`, `ProblemTagEntity`, `ActionMetadataArgs`, `AbstractAssets`, `ServerAccessKey`, `ValueQuery`, `MutableTreeModelNode`, `PatcherServer`, `RollupWatcherEvent`, `HttpPipelineLogLevel`, `DeclarationName`, `ResolveRequest`, `GalleryImageVersion`, `DebugGeometry`, `Keyframe`, `ContractAPI`, `LinearOptions`, `ThemeNeutralColors`, `IImageBuilder`, `AccountV10`, `ListTagsForResourceCommand`, `CLValue`, `PinchGestureEventData`, `UnionBuilder`, `AggregateCommit`, `WalkStats`, `BTI`, `NodeJS.Dict`, `StateChannelsJsonRpcMessage`, `IObserver`, `InteractiveController`, `IContentSearchFilter`, `SVGDefsElement`, `IFluidHandleContext`, `TopAggregateParamEditorProps`, `t_08f7c2ac`, `LoaderFactory`, `RecommendationSummary`, `ListenerFn`, `VNodeElement`, `InnerJoin`, `OptionalResources`, `MessageInstance`, `PaymentMethodCreateParams.BillingDetails`, `PathFn`, `CreateModelCommandInput`, `GoldenLayout.ItemConfig`, `TrackedSet`, `ICaptainDefinition`, `SingletonDeployment`, `JobDatabase`, `WrappedDocument`, `ProbabilitySemiringMapping`, `AzureBlobStorage`, `grpc.Metadata`, `DbSystemEndpoint`, `User1524199022084`, `JSMs.Services`, `BooleanType`, `OutputFile`, `ConverseContext`, `InvoiceItem`, `SelectorGroup`, `DescribeDomainsCommandInput`, `CdkHeaderRowDef`, `PendingRequest`, `ExtensiblePayload`, `FolderData`, `Gamepad`, `Iter`, `StyledComponentClass`, `IChangeRequestManagementItem`, `DirectionsType`, `MethodDeclaration`, `Dialogic.MaybeItem`, `RouterHistory`, `NameValue`, `ScreenInfo`, `CreateProcedureWithInputOutputParser`, `LambdaIntegration`, `LineNode`, `MXDartClass`, `CallReceiverMock`, `IOmnisharpTextEditor`, `SrvRecord`, `InputNode`, `TestView`, `ElectronEvent`, `IAssetState`, `ArcTransactionProposalResult`, `IEventType`, `TracksState`, `APIConfigurationParameters`, `IterableOrArrayLike`, `UploadFile`, `FaunaCollectionOptions`, `FirmwareWriterProgressListener`, `CFDocsDefinitionInfo`, `IThemeRegistration`, `R.Chain`, `ITransaction`, `NSDictionary`, `WorkerResponse`, `StackOutput`, `Exprs`, `Ui`, `Apollo.LazyQueryHookOptions`, `ConditionResolver`, `GetStateReturn`, `BasicJewishDate`, `FetchOptions`, `ISetting`, `DidChangeWatchedFilesParams`, `MaybeFuture`, `ProviderLike`, `PerimeterEdge`, `MessageFormatterOptions`, `Subscriber`, `TestingModule`, `DiagnosticWithLocation`, `RendererEvent`, `ItemPositionCacheEntry`, `SystemVerilogIndexer`, 
`AuthenticationPolicy`, `ITokens`, `Kwargs`, `ColumnScope`, `MissingTranslationHandlerParams`, `DAOMigrationParams`, `DeletePackageCommandInput`, `LayouterService`, `ast.MacroCallNode`, `LogSummary`, `polymer.Element`, `FormRenderer`, `SampleUtterances`, `UnitRuntimeContext`, `AlertUtils`, `Params$Create`, `EdmxFunction`, `yargs.Argv`, `PrerenderHydrateOptions`, `CommandBuilder`, `TextElement`, `d.TransformCssToEsmOutput`, `ClientRequestFailedEventArgs`, `PaginateOptions`, `Synth`, `angular.IHttpService`, `ProxyPropertyKey`, `OmniOscillator`, `TGetStaticProps`, `MDL0_MaterialEntry`, `PotentialLemma`, `NodeArray`, `CoinTransferMap`, `EmbeddableRendererProps`, `WebpackTestBundle`, `WorkflowDto`, `PublicCryptoKey`, `JCorner`, `SimpleStore`, `RawResponseCallback`, `BastionShareableLinkListRequest`, `IShikiTheme`, `UploadedFile`, `UI5Aggregation`, `SceneActivationCCSet`, `BaseSyntheticEvent`, `OsuBuffer`, `_SelectExplanation`, `TabbableHTMLProps`, `ListSubscriptionsResponse`, `CollisionEndEvent`, `WarframeData`, `UIBrewHelper`, `O.Compulsory`, `NPCActor`, `IDejaGridColumn`, `TextSegment`, `DenomHelper`, `ClockRotate`, `VariableMap`, `GitScmProvider`, `TreeIterator`, `RuntimeShape`, `SavedObjectsExportError`, `Endorser`, `Records`, `SignalOptions`, `DeletePublicAccessBlockCommandInput`, `BgState`, `NgxPermissionsService`, `Atom.Point`, `DecodeError`, `IAddGroupUsersResult`, `TypeData`, `BaseCollider`, `ParsedCssDocument`, `AccountOperation`, `requests.ListOnPremConnectorsRequest`, `CellInterface`, `CanvasItem`, `ISet`, `MessageEmbeddedImage`, `OptionalWNodeFactory`, `CloudService`, `ReadRepository`, `i`, `UpdateHostClassService`, `TestDriver`, `PreviewVer`, `SignatureHelpItem`, `IMergeTreeOp`, `GoToOptions`, `KibanaResponseFactory`, `SimpleRNNCell`, `apid.Rule`, `NgxDropzoneService`, `WsTitleService`, `StorageDriver`, `QuaternionKeyframe`, `Comparable`, `L.LatLngExpression`, `IProfileMetaData`, `Pooling2DLayerArgs`, `GetIntegrationCommandInput`, `OverridePreferenceName`, `RepositionScrollStrategy`, `Timer`, `INotificationOptions`, `ChartsPluginSetup`, `ScenarioResult`, `ShardFailureOpenModalButtonProps`, `ForgotPasswordEntity`, `UiCounterMetricType`, `FramePin`, `ContextContainer`, `KubectlContext`, `React.DependencyList`, `ViewportOptions`, `IMatchOptions`, `CheckboxGroupProps`, `SessionLogoutRequest`, `DeploymentParameters`, `VoiceFocusDeviceOptions`, `PendingMaintenanceAction`, `ThemeServiceStart`, `TimeInfo`, `LintReport`, `SearchParamAsset`, `ServiceError`, `PaperProfile`, `Linkman`, `GaussianNoiseArgs`, `CssSelector`, `Lookup`, `CumsumAttrs`, `ArmObj`, `TUserBaseEntity`, `InternalHttpServiceStart`, `DaemonSet`, `DocumentAccessList`, `HmrStyleUpdate`, `UpdateSource`, `MsgPauseGroup`, `QCNode`, `TBuilder`, `AbiStateUpdate`, `ForInStatement`, `AlfredConfigWithUnresolvedTasks`, `IntervalCollectionIterator`, `firebase.firestore.DocumentReference`, `ResolvedCoreOptions`, `CommandLineToolModel`, `ApexDebugStackFrameInfo`, `RowArray`, `PartialObserver`, `DataPublicPluginEnhancements`, `WExpression`, `CogJob`, `ILoaderPlugin`, `PendingFileType`, `GBDeployer`, `Kernel.IFuture`, `RangeInterface`, `CheerioAPI`, `UpdateSubscriptionsRequest`, `WasmTensor`, `OriginationOp`, `INEO`, `Updater`, `ReLULayerArgs`, `Clef`, `vscode.Task`, `So`, `PrivateEndpointConnectionsDeleteOptionalParams`, `TransmartExportJob`, `JsonDocsStyle`, `TabsService`, `MIRResolvedTypeKey`, `LockerService`, `NAVTableField`, `EventAction`, `ModifyDBClusterEndpointCommandInput`, `admin.app.App`, `DeploymentDocument`, `InsertQuery`, 
`SharedString`, `DotDotDotToken`, `MyAccountPage`, `Demand`, `Testability`, `ItemBuffer`, `Rx.AjaxRequest`, `YamlCodeActions`, `VAStepWord`, `IVFSMount`, `DeploymentCenterStateManager`, `CodeLensBuffer`, `DeserializedType`, `FocusEventInit`, `i18n.TagPlaceholder`, `Spreadsheet`, `ListCertificatesCommandInput`, `Indicator`, `ts.CustomTransformers`, `RenderErrorHandlerFnType`, `ColorDataObj`, `IGroup`, `FlowNode`, `PageBlock`, `AveragePooling2D`, `ResponsiveMode`, `QuantityFormatter`, `InputStream`, `PipetteOffsetCalibration`, `FunctionCallArgumentCollectionStub`, `requests.ListMountTargetsRequest`, `UserUI`, `TagList`, `V1PersistentVolumeClaim`, `PlasmicContext`, `Seek`, `BaseClientOptions`, `Requestor`, `CreateRulesSchema`, `DeleteRoomResponse`, `CheckboxChangeEvent`, `PhraseFilter`, `LineInfo`, `TableRowProps`, `AuthenticationHelper`, `ResolverBuilder`, `IColumns`, `Processor`, `IPointData`, `QuestionFormatter`, `WebTreeMapNode`, `Redis.ClusterOptions`, `ResponsiveFacade`, `ClassTypeResult`, `HttpEventType`, `ZRImage`, `DomRecorder`, `MnemonicVariationsX86`, `IDynamicGrammarGroup`, `AnimationTriggerMetadata`, `HDKey`, `PDFOperatorArg`, `MerchantOrderGoodsEntity`, `OutcomeShortHand`, `m.Component`, `IncomingStateType`, `AndroidMessagingStyle`, `UsePaginatedQueryData`, `Events.entertrigger`, `LegendProps`, `IZosFilesOptions`, `DeleteApplicationCommand`, `tsc.TypeChecker`, `CredentialRequestOptions`, `CardRenderEffect`, `PluginsService`, `PropSchema`, `SavedObjectsUpdateResponse`, `SidebarLinkProps`, `SingleOrBatchRequest`, `OrderState`, `LedgerState`, `ReactIntl.InjectedIntl`, `SeekOutput`, `SignatureFlags`, `OpenApiSchema`, `ZipFile`, `InsertOneWriteOpResult`, `DeploymentGroupConfig`, `ServiceWorkerState`, `AnimationBase`, `LambdaNode`, `EffectScope`, `ChromiumBrowserContext`, `HitEvent`, `Camera_t`, `ConfigurationItem`, `Vue.CreateElement`, `FsUtil`, `ShouldShow`, `RequiredParserServices`, `DeleteFriendsRequest`, `SearchKey`, `ChangedEvent`, `TabBarProps`, `STDataSourceOptions`, `ObjectInstance`, `ClickOptions`, `KeyedReplacementMap`, `HelperService`, `RepositoriesStatisticsState`, `ConversationTimeline`, `GetStorageSuccessCallbackResult`, `SAPNode`, `AuthKey`, `PhysicsStatistics`, `VirtualMachine`, `IUserRepo`, `SheetObject`, `ToTypeNode.Context`, `Traversable3`, `TraceIdentifier`, `translateMapType`, `UITapGestureRecognizer`, `SwitchLayerAction`, `IEditorAction`, `Kernel.IKernel`, `PhysXPhysicsMaterial`, `DataToExport`, `Publications`, `SendCommandRequest`, `SavedObjectsImportResponse`, `RelativeTime`, `IThriftRequest`, `GXShapeHelperGfx`, `KeymapItemEditableProps`, `BaseItemState`, `ValidEndpointType`, `Clip`, `Stone`, `IErrorState`, `CompoundSelector`, `KeybindingScope`, `RenderCamera`, `KeyBindings`, `JQueryXHR`, `Exponent`, `OptionEditorComponent`, `hardforkOptions`, `NewLineFile`, `Azure.TableBatch`, `TAccount`, `d.ComponentCompilerTypeReferences`, `GenericTreeItem`, `OptionsDevice`, `ParallelPlot`, `UnarynotaddsubContext`, `DropDownOption`, `ImperativeBase`, `HexDocument`, `BFT`, `NullAndEmptyHeadersClientCommandInput`, `FilterList`, `PDFPageLeaf`, `EarlyStopping`, `Timeslot`, `DispatcherLocals`, `XQuery`, `EmbeddableActionStorage`, `ToolbarItemProps`, `Axis3D`, `pb.Callback`, `DAL.DEVICE_ID_SCHEDULER`, `MatcherHintOptions`, `Ledger`, `Space2DSW`, `MTDTexture`, `Voting`, `IRoutes`, `HandlerInboundMessage`, `InstrumentedStorageTokenFetcher`, `ThunkActionT`, `C`, `MzInjectionService`, `JSType`, `DrawType`, `DebtPareto`, `SubmodelImage`, `ExcaliburGraphicsContextWebGL`, 
`DiagnosticRule`, `DecodedSourceMap`, `HsDialogContainerService`, `OptimizerConfig`, `MdcSnackbar`, `MDL0ModelInstance`, `GetMyProfileCommand`, `S1Node`, `CourseName`, `CreateAssetDTO`, `GetStudioCommandInput`, `MemoryPartition`, `ImportSavedObjectsOptions`, `MediaQueryData`, `VirtualInfo`, `ethers.Signer`, `ApplicationCommandOptionChoice`, `Debe`, `ActionBase`, `DataModels.Correlations.Correlation`, `UpdateActionDef`, `Indy.LedgerRequest`, `OrganizationContactService`, `RouteFactory`, `UpSetSelection`, `CallNode`, `LocaleProviderService`, `_IDb`, `setting`, `ButtonManager`, `CompactProtocol`, `MessageChannel`, `ENUM.AfflictionType`, `SymbolTracker`, `ReaderStateParserLike`, `ReadBuffer`, `NzSafeAny`, `DependencyWheelPoint`, `TargetTypesMap`, `SenderFunction`, `QueryStringInputProps`, `RedspotContext`, `AllInputs`, `requests.ListCostTrackingTagsRequest`, `PluginModel`, `MerchantIdentity`, `ImmutableNotebook`, `IqResponseStanza`, `SpecialPropertyAssignmentKind`, `TriggerType.GITHUB`, `ResetDBParameterGroupCommandInput`, `OverlayProps`, `ImageCache`, `InMemoryDriver`, `ApisInfo`, `BenefitMeasurement`, `requests.ListIamWorkRequestLogsRequest`, `ConnectedComponent`, `GfxMegaStateDescriptor`, `SubmissionEntity`, `IDiagramState`, `RegisterDeprecationsConfig`, `CommandName`, `StartDBClusterCommandInput`, `TextureParameterEnum`, `AdapterUser`, `RedisClient`, `Error_ContextEntry`, `PostRoles`, `OperationVariant`, `GetPolicyCommandInput`, `GridElement`, `HTTPBuffer`, `SdkProvider`, `RGBStrings`, `MeshVertice`, `SalesInvoiceModel`, `requests.ListErrataRequest`, `PlaneAngle`, `apid.ReserveId`, `DeployedCodePackageCollection`, `ServerHost`, `Twitter`, `ExampleDefinition`, `IChannelFactory`, `IntrospectionInputTypeRef`, `Booru`, `FieldFormatsRegistry`, `ExtraComment`, `DescribeComponentCommandInput`, `ImplDeployment`, `BIP32Path`, `order`, `ClientsService`, `IItemScore`, `AnalyticUnit`, `Angulartics2Mixpanel`, `vscode.WorkspaceEdit`, `JellyfishWallet`, `Matrix44`, `DkrTexture`, `Cropping2DLayerArgs`, `IntersectionObserverInit`, `StreamDeckWeb`, `IAssignmentUnitModel`, `TPackageJson`, `Electron.MenuItemConstructorOptions`, `Controller$`, `IpcMessageEvent`, `DayProps`, `IModuleStore`, `CAShapeLayer`, `BlockIndex`, `MeasureFormatter`, `ListWorkRequestLogsRequest`, `TSTNode`, `TradeFetchAnalyzeResult`, `IBabylonFileNode`, `TranslationService`, `LeaveGroupRequest`, `GridAxis`, `TimesheetService`, `ConfigurationTarget`, `IQueuedMessage`, `ActiveComponent`, `com.mapbox.pb.Tile.ILayer`, `OidcRegisteredService`, `GatewayToConceptRequest`, `TickItem`, `ContentManager`, `TTableOperand`, `Animation`, `WebFontMeta`, `CompilerSystemCreateDirectoryOptions`, `SelectionChangeEventArgs`, `Koa.Context`, `PositionWithCaret`, `LuaType`, `BrowseService`, `SendTxBody`, `NavigationContext`, `X`, `IShadingContext`, `DataModels.TokenHistory.TokenHistoryGroup`, `RippleCreateTransactionOptions`, `Appearance`, `KeyList`, `LabelSet`, `KeycloakAdminClient`, `E2EProcessEnv`, `Bond`, `GraphQLFieldConfigArgumentMap`, `EntityValidator`, `BazelWorkspaceInfo`, `SavedDashboardPanel730ToLatest`, `Animated.SharedValue`, `MatButtonToggle`, `IVSCServerManagerEventsHandler`, `ProposalResponse`, `ICfnSubExpression`, `IDataSource`, `ColumnFilters`, `Receiver`, `DefaultRequestSigner`, `DataNode`, `DotnetInsightsGcDocument`, `Vector4_`, `ExpressRoutePort`, `InitiatingWindowProps`, `MatchmakerMatched_MatchmakerUser_StringPropertiesEntry`, `IValidatedEvent`, `IJavaProjectWizardContext`, `Bsp`, `TileData`, `FixedPointX64`, `ReaderTask`, 
`UsernamePassword`, `ContinuousDomain`, `PIXI.DisplayObject`, `C9`, `TaskStatus`, `ESLSelectItem`, `ThyDropPosition`, `DeviceRegistryService`, `ZWaveErrorCodes`, `t.NodePath`, `CacheInfo`, `ToggleDeselectSeriesAction`, `ChaincodeStub`, `Object`, `Price`, `KintoRequest`, `ListInstancesRequest`, `requests.ListVolumeBackupsRequest`, `DeployStepID`, `ISetActionTypes`, `IItemBase`, `RenameEntityEvent`, `DataTypeDefinition`, `TSTypeElement`, `core.CommonInputFieldConfig`, `PQLS.Analysis`, `SerializedTemplateInfo`, `t.Statement`, `ValueDescriptor`, `InitiateLayerUploadCommandInput`, `TestCursorQuery`, `DeepPartial`, `TrustedSc`, `ObjWrapper`, `FakeData`, `FrameworkInfo`, `LineComment`, `NullLiteralExpr`, `Matrix3x2`, `t.Visitor`, `ISmartContract`, `ConnectionMode`, `SingleProof`, `MatchedContext`, `Calc`, `IdentifyEventType`, `Documentable`, `LayerType`, `SavedQueryAttributes`, `AuditConfig`, `_ISchema`, `tf.io.WeightsManifestConfig`, `ts.TypeLiteralNode`, `vue.ComponentOptions`, `DescribeParameterGroupsCommandInput`, `ApplicationRef`, `requests.ListDataGuardAssociationsRequest`, `TSConfig`, `PhotoService`, `JsonPatchOperationsState`, `FSWatcher`, `TLPointerInfo`, `PackageTarget`, `IFieldsAndMethods`, `AnimationFrame`, `BuildVideoGetQueryOptions`, `TokenBalance`, `ProductVariantSettingService`, `BreadcrumbsListProps`, `BigQueryRetrievalResult`, `CommentTag`, `AggregateRewriteData`, `CachedQuery`, `trm.ToolRunner`, `StickerOptions`, `RelationshipType`, `TAccumulate`, `GetGroupCommandInput`, `ReactiveInteraction`, `CurrencySymbolWidthType`, `IServiceManager`, `Flair`, `RnM2Accessor`, `RemoteEvent`, `PixelImage`, `MetricState`, `DataChannel`, `IOdspTokenManagerCacheKey`, `TablePaginationConfig`, `Bunjil`, `EaseItem`, `EncryptedDataKey`, `SolutionSet`, `K.FlowTypeKind`, `CssPropertyOptions`, `app.LoggerService`, `FixedPointNumber`, `Docker`, `EbsMetricChange`, `InstallStatus`, `TransactionType`, `Model.Option`, `LineSegment3d`, `KeyPairTronPaymentsConfig`, `IndexSpecification`, `EdgeMaterialParameters`, `FloatSym`, `DraftBlockType`, `CreateAlbumeDto`, `SerializedRenderResult`, `StreamReturn`, `argparse.ArgumentParser`, `ListrRendererValue`, `QueryGraph`, `WorldBoundingBox`, `TradeStrategy`, `IDiffObject`, `SelectEffect`, `EmbeddableOptions`, `AveragePooling3D`, `OrganizationService`, `ContentMatcher`, `CellOptionType`, `AppHookService`, `AnyArenaNode`, `SIDE`, `EngineArgs.EvaluateDataLossInput`, `LexerActionExecutor`, `FlexParentProps`, `LeftHandSideExpression`, `Capture`, `MDCSwitchAdapter`, `requests.ListApmDomainsRequest`, `ASSymbol`, `WCLFight`, `MutationTree`, `MDCCheckboxAdapter`, `DeleteTableCommandInput`, `CreateOneInputType`, `DebugProtocol.ContinueResponse`, `PostProcessor`, `TAbstractControl`, `BasicSeriesSpec`, `Usage`, `WithSubGeneric`, `BehaviorHook`, `MappedCode`, `OnLoadArgs`, `AzureAccount`, `IObjectHash`, `ProofCommandResponse`, `StackGroup`, `FormPayload`, `ColumnSetting`, `Prioritized`, `IMessageItem`, `FiltersState`, `SyncExpectationResult`, `InitMessage`, `HttpsCallableResult`, `Double`, `UserLecture`, `TradeSearchRequest`, `MockedResponse`, `DefItem`, `jdspec.ServiceSpec`, `ControlPanelConfig`, `StateInfo`, `RTCDataChannel`, `TStore`, `HairProps`, `TransformFactory`, `COURSE_TYPE`, `G`, `IHookStateSetAction`, `IZoweTreeNode`, `Candidate`, `ICommandArgs`, `TagResourceCommandOutput`, `DynamicColorProperty`, `GeniePlugin`, `ParsedBlock`, `GithubIssueItem`, `BlockNumberState`, `BenefitService`, `StaticConnectionType`, `BinaryTreeNode`, `AskQuestionsParams`, 
`UnionOfConvexClipPlaneSets`, `FileSystemStats`, `IterationDirection`, `ErrorInfo`, `ISectionProps`, `DecryptResultPmcrypto`, `TupleType`, `ChannelSummary`, `StoryFile`, `IVarSize`, `InstanceStatus`, `BasicCCSet`, `RepositoryIssue`, `IAuthResponse`, `ESTestIndexTool`, `GenericLogger`, `SystemFixture`, `OrganizationsService`, `IExtent`, `Importance`, `DebugProtocol.ScopesArguments`, `DiagramEngine`, `IPathResultItem`, `Base58CheckResult`, `BINModelSectorData`, `ClassBuffer`, `Showtime`, `Carrier`, `DraftDecoratorType`, `IListItemAttrs`, `GeoLatLng`, `RequestTracingConfig`, `IdentifierInput`, `ForeignAttributeSelector`, `BasicUnit`, `WebContext`, `IChallengeProps`, `CommentGraphicsItem`, `SignatureHelpResults`, `MdDialogRef`, `InfoWindow`, `AddressInformation`, `ReminderFormatConfig`, `handlerFunc`, `ResourceTimelineGridWrapper`, `RtpTrack`, `FixResult`, `PCode`, `TimeRequestOptionsSourcesTargets`, `SimpleAttribute`, `ICreateVsamOptions`, `ReportService`, `GfxDevice`, `SideBarItem`, `BlockingResponse`, `FileSource`, `TwComponent`, `UsageCounter`, `OverlayPortal`, `cdk.CustomResource`, `VirtualMachineRunCommand`, `META`, `MarkerSnap`, `TransactionOverrides`, `Toplevel`, `MaybeElementRef`, `IPythonVenvWizardContext`, `AuthenticationConfiguration`, `RegionTagLocation`, `EntityAction`, `ChildRule`, `CertificateSubjectAlternativeName`, `CanvasTextBaseline`, `MockSegmentStore`, `TypeMoq.IMock`, `CredentialStore`, `MenuContextProps`, `WorkflowType`, `HTMLDivElement`, `NodeTree`, `SuspenseContextType`, `RoutesManifest`, `QR`, `CoinPretty`, `HashedFolderAndFileType`, `AnyType`, `LoginSuccessPayload`, `ModifyPoint`, `FauxClassGenerator`, `ParsedTemplate`, `Transforms`, `Enum`, `ListIntegrationInstancesRequest`, `P7`, `LanguageType`, `SortingService`, `Jest26CacheKeyOptions`, `ResponsiveInfo`, `InstallTypingHost`, `ThemeReducer`, `ZoneDefinitionModel`, `MessageReaction`, `PathData`, `TrueSkill`, `APIPost`, `CheckFn`, `requests.ListReplicationPoliciesRequest`, `BrowserFiles`, `I18N`, `HexMesh`, `ISearchResult`, `ConfigurationModel`, `IAllBondData`, `CumSumProgram`, `requests.ListBootVolumeReplicasRequest`, `IGetToken`, `DOMEventName`, `RecordProxy`, `CliArgs`, `HighlightService`, `CoreEnvironment`, `BScroll`, `Uint32List`, `DecimalFormatOptions`, `ModalWrapperProps`, `UserForm`, `ActivationFunction`, `StructuredAssignment`, `ContactMock`, `ChartErrorEvent`, `KnownAction`, `ServerConfig`, `MediaModel`, `ReactTestRendererTree`, `RangeResult`, `PositionOffset`, `ICategoricalFilter`, `AuthzService`, `ResponseGenerator`, `ResourceUnavailableException`, `DaffCategoryFilterToggleRequest`, `SecurityHealth`, `AxisOptions`, `StateT`, `ContextMenuInteraction`, `ListResourceTypesRequest`, `HsShareUrlService`, `Err`, `SinonMatcher`, `ICfnBinding`, `Types.GenerateOptions`, `PLAYER`, `GameplayClock`, `IHealthStateChunk`, `JointTreeNode`, `ShortcutID`, `TQuestionFull`, `Node.MethodParams`, `ErrorContext`, `QuirrelClient`, `AssignNode`, `Color.RGBA`, `Dexie.Table`, `NVM3Object`, `ReserveInstance`, `TrimmerTheme`, `I`, `IRoute`, `ProductContentPipe`, `CandidateStore`, `RemoteHotspot`, `TableServiceClient`, `LogAnalyticsParserField`, `JsxElement`, `ProxyOptions`, `AnonymousType`, `requests.ListCloudExadataInfrastructuresRequest`, `RenderTreeDiff`, `TensorArray`, `TypeTable`, `ActionRuntimeContext`, `CardScript`, `UpdateUserDto`, `TestSetupBuilder`, `GetRouteCommandInput`, `SdkPingPongFrame`, `TableAccessFullStep`, `StopExecution`, `AssetItem`, `DependencyIdentifier`, `SyncTable`, `DOMMatrix`, `PositionObjOrNot`, 
`IRecordedApiModel`, `Eula`, `PopoverController`, `ChromeApi`, `FileReflection`, `TimelineStep`, `OptionsState`, `WorkspaceFolderConfig`, `BotFrameworkAdapter`, `Intl.DateTimeFormatOptions`, `CalendarDay`, `EnvironmentInfo`, `RecoilValueReadOnly`, `ToComponent`, `GeneratorPaths`, `BoneAnimator`, `XTreeNode`, `ContextService`, `SCN0`, `NestedStagePanelsManager`, `BooleanInput`, `RequestWithUser`, `AnimatedSettings`, `S5PL2Layer`, `SavedObjectLoader`, `Storable`, `SimpleManipulator`, `ViewRef`, `Space`, `Kinds`, `SerialFormat`, `NodeEncryptionMaterial`, `MergeOptions`, `IDataSet`, `FontCatalogConfig`, `PagesService`, `RecordingSegment`, `CameraRigControls`, `vscode.DiagnosticSeverity`, `ConvertState`, `GetDomainRecordsResponse`, `WeatherService`, `IFontManager`, `JasmineTestEnv`, `TextDocumentSettings`, `BigNumber.BigNumber`, `MatchExp`, `ShapeBase`, `ListSourcesRequest`, `WebAppConfigStack`, `AssetBalance`, `ICommandBarProps`, `NormalizedCacheObject`, `SortingOrder`, `TreemapSeries.NodeObject`, `DevServer`, `TargetDatabaseTypes`, `TextureCubeFace`, `PatternStringProperty`, `PasswordGenerationService`, `CameraStrategy`, `Pow`, `BoundCurves`, `Prompt`, `Selective`, `PickerDelegate`, `ChartDef`, `ScullyContentService`, `DependencyPair`, `Function`, `Shift.Expression`, `IOpenSearchDashboardsSearchResponse`, `DescribeFleetsCommandInput`, `ActionProcessor`, `NumberValue`, `LineGraphicsOptions`, `WebSocket.MessageEvent`, `NextHandler`, `IAdapter`, `ReactiveChartStateProps`, `StrictValidator`, `ValidatePurchaseAppleRequest`, `DeleteDatasetCommandInput`, `ChipCollection`, `UnixTimestamp`, `PSTTableItem`, `VerdaccioConfig`, `ExecutionItem`, `OpenApiPersistedSchema`, `ts.PrefixUnaryExpression`, `RequiredValidator`, `Purchase`, `CompiledQuery`, `BaseRouteName`, `IDefinition`, `AlainSFConfig`, `PublisherSummary`, `ChartData`, `IStopwatch`, `SRT0_TexData`, `instance`, `MockProxy`, `PositionData`, `OnPostAuthResult`, `LogoActionTypes`, `PackageDependency`, `Publisher`, `PlatformNode`, `ParseSpan`, `ByteArray`, `AsyncBarrier`, `MmpService`, `_`, `Aes128Key`, `LogStructuredData`, `DataPointPosition`, `StaticMeshAsset`, `JsonRpcResult`, `IInterceptorOptions`, `_1.Operator.fλ`, `ReadOnlyAtom`, `ScreenDimension`, `IHandlerParameters`, `SignedStateVarsWithHash`, `model.Model`, `TPositionPair`, `Flatten`, `ReportingConfig`, `AudioOutputFormatImpl`, `ComponentSymbolTable`, `LeaguePriceSource`, `CircularAxisData`, `SupportedBodyLanguage`, `EntityConfig`, `PythonShell`, `CirclinePredicate`, `FilterRequest`, `UploadData`, `AlphaDropoutArgs`, `LoginToken`, `DocumentCollection`, `PreprocessorSync`, `IExchangeInfo`, `MakiObject`, `BackendValues`, `DAL.KEY_TAB`, `SlideProps`, `i.Node`, `OnProgressCallbackFunction`, `StringValidator`, `ITokenResponse`, `HashMap`, `IGBInstance`, `EidasRequest`, `CommentModel`, `PackInfo`, `HeaderProps`, `GlobalAveragePooling1D`, `IPhysicsMaterial`, `OverlappingPathAnalyzer`, `RelaxedPolylinePoint`, `ComplexSelector`, `TxHelper`, `CoreURI`, `VMLDOMElement`, `requests.ListJobShapesRequest`, `StacksKeys`, `PopperProps`, `ActiveModifiers`, `StateDto`, `SimpleStatementContext`, `UndoManager`, `OptionalEntry`, `TypeDisplayOptions`, `OpcuaForm`, `SFCDiffWatcher`, `ManagedListType`, `StorageEngine`, `FormDialogService`, `ParticipantItemStrings`, `V1WorkflowStepInputModel`, `MessageLogger`, `BodyComplexClient`, `ReferencesNode`, `EC_Public_JsonWebKey`, `messages.Ci`, `DnsRecord`, `BadgeButtonWidget`, `ListChannelsResponse`, `CombatStats`, `ServiceProviderAdapterMongoService`, `ScaleGamma`, 
`TimeIntervalTriggeringPolicyConfig`, `DefaultPass`, `IColorValueMap`, `IceState`, `IValidationSchema`, `MgtFlyout`, `IBasePickerSuggestionsProps`, `BN.Value`, `ReportStoreService`, `NavLocation`, `Highcharts.ClusterAndNoiseObject`, `ServerEntry`, `CreateTagsRequest`, `megalogDoc`, `anyNode`, `StateNamespace`, `AccessLevel`, `anchor.Wallet`, `NzConfigService`, `markdownit`, `BaseException`, `LangiumSharedServices`, `Emission`, `AccessTokenData`, `IAggregateConfiguration`, `HttpFetchOptions`, `SerializedSourceAnalysis`, `ListDatasetsCommandInput`, `ProxyGroup`, `LabelEncoder`, `PlotlyLib`, `PayloadTooLargeError`, `SpecQueryModelGroup`, `XColorsTheme`, `ATN`, `StructureCollection`, `SVGProps`, `MessageImages`, `core.ETHVerifyMessage`, `AtomicAssetsNamespace`, `StackStatus`, `TypeOptions`, `IInviteGroupUsersResult`, `AllureStep`, `CertificateProfileType`, `MockTemplateElement`, `PluginRevertActionPayload`, `android.view.ViewGroup`, `Topic`, `TabWatcher`, `SemicolonClassElement`, `NgbDate`, `AllSeries`, `ServerService`, `FileElement`, `DaffCartCoupon`, `SubTrie`, `CreateOrganizationCommandInput`, `LayoutProps`, `PickerInput`, `UIViewControllerTransitionCoordinator`, `ClassNameMap`, `NodeGraphicsItem`, `ConnectionProvider`, `ComponentChild`, `NativePlatformResponse`, `NgGridItemEvent`, `EdmxProperty`, `ActionReducerMapBuilder`, `VisitFn`, `NSFileManager`, `DebugVariable`, `UpdateContent`, `A4`, `CollectorFilter`, `ChildBid`, `PathNodeData`, `WhitelistInstance`, `IHttpGetResult`, `InsertWriteOpResult`, `DataMaskCategory`, `IEditorController`, `PatternCaptureNode`, `WorkspaceHeader`, `DataClient`, `IKsyTypes`, `TaskConfigurationModel`, `ProcessRequirementModel`, `HappeningsValidationOutcome`, `ObservableArrayProxy`, `BuildRequestOptions`, `GlobalEventModel`, `StorageLocation`, `IFB3Block`, `AppearanceService`, `AvailabilitySlot`, `TsSelectionListComponent`, `XActionContext`, `RuntimeConfig`, `TsLinter`, `StreamModule`, `StateInline`, `GridRenderCellParams`, `Round`, `requests.ListJobsRequest`, `Play`, `ReturnTypeInferenceContext`, `GraphQLConnectionDefinitions`, `FactoryFn`, `ServiceRequest`, `Func`, `IQueryCondition`, `DeleteScheduleCommandInput`, `PrefixLogger`, `hubCommon.IRevertableTaskResult`, `StackCardInterpolatedStyle`, `Nameable`, `ServiceRoom`, `CSSObjectWithLabel`, `AppExecution`, `IStop`, `FileRelativeUrl`, `ModelFactory`, `SafeUrl`, `FieldTransformConfig`, `ElementX`, `TickLabelBounds`, `ColumnPreset`, `IDockerImage`, `SerializableResponse`, `Asserts`, `EventRecord`, `XData`, `ExtraPost`, `SMTLet`, `IProgressReporter`, `DateTableContext`, `ComponentCompilerLegacyConnect`, `ListPipelineExecutionsCommandInput`, `Cascade`, `IRoom`, `ProxyAgent`, `ServiceIdRequestDetails`, `Allocation`, `SqlTuningTaskSqlDetail`, `CRUDEngine`, `DocumentGeneratorItem`, `AngularPackageLoggerMessage`, `LoggingMetaData`, `IrisIcon`, `CreatePackageCommandInput`, `MockedLogger`, `ExtraOptions`, `LambdaService`, `IDragCursorInfos`, `CandidatePair`, `RoomPosition`, `LogicNode`, `WorkspaceRepo`, `TransactionExplanation`, `ICardProps`, `VersionData`, `ISavedVis`, `TestCaseSetup`, `TurtleBuilder`, `ProsemirrorAttributes`, `JLCComp_t`, `DejaColorFab`, `ClassSession`, `SyntheticPerformanceMetrics`, `DocumentRecord`, `HeadersInit`, `PointEditOptions`, `SharedService`, `Listable`, `ExportKind`, `ExportsAnalyzerResult`, `ResponderModel`, `TempoEvent`, `CreatorBase`, `RxnArrow`, `Multiply`, `MockSetup`, `FormatFunc`, `ThyTooltipConfig`, `ArmResourceDescriptor`, `GoodGhostingInfo`, `NetworkAddress`, `ReportingAPIClient`, 
`NodeInterface`, `PermissionType`, `CommentReply`, `AppLeaveHandler`, `GroupedPriorityList`, `Amplitude`, `vscode.DocumentSymbol`, `RowValidatorCallback`, `NgWalkerConfig`, `DkrObject`, `ConnectionErrorCode`, `SpeechTranslationConfigImpl`, `WebSiteManagementModels.AppServicePlan`, `TimeChangeEvent`, `ListrTaskWrapper`, `requests.ListDbSystemPatchesRequest`, `EmitTextWriter`, `EditableRow`, `DocumentViewResponse`, `HttpAdapter`, `apid.ThumbnailId`, `ShipData`, `BoxSliderOptions`, `TabType`, `MetadataInfo`, `Highcharts.QuadTreeNode`, `ElasticLoadBalancingV2MetricChange`, `NgModuleRef`, `AggsCommonStart`, `UntypedBspSet`, `PhysicalQueryPlanNode`, `AddUserCommand`, `FunctionConfig`, `Transition`, `ReduxCompatibleReducer`, `FieldDefinitionNode`, `AESEncryptionParams`, `TombFinance`, `Html5QrcodeSupportedFormats`, `SdkIndexFrame`, `e`, `OrchestrationVariable`, `HTMLIonMenuElement`, `CreateDeploymentResponse`, `BaseApi`, `FreezeObject`, `AliasDeclaration`, `CreateAppCommandInput`, `RoleHTMLProps`, `PortRecordType`, `EnrichedDeprecationInfo`, `TestObject`, `CommonNode`, `MDCButton`, `Tremolo`, `AtomicMarketContext`, `Id64Set`, `OidcCtx`, `VariableUse`, `IModel`, `DiagnosticCategory`, `DescribeChannelModeratedByAppInstanceUserCommandInput`, `EventFacade`, `CounterProps`, `SymbolFlags`, `PartialSequenceLength`, `AdvancedSettings`, `GitlabUser`, `UIToast`, `DebtTransaction`, `DateRangeBucketAggDependencies`, `DirectChannel`, `DensityBuilder`, `E2EPageInternal`, `TraceNode`, `StoreModule`, `PublicationDocument`, `ColumnRefContext`, `AdEventListener`, `BeanWrapper`, `NextFunction`, `JWTVerifyResult`, `ITrackItem`, `MatcherState`, `SlashParams`, `ContentItem`, `TInterval`, `GnosisExecTx`, `SolidLineMaterial`, `Inode`, `IdTokenResult`, `IArtifact`, `TriggerApexTests`, `ResDicEntry`, `CtrFail`, `ApexExecutionOverlayAction`, `SearchQueryProps`, `VisualizeEditorVisInstance`, `SurveyPropertyEditorBase`, `alt.Vector3`, `YamlParser`, `CodeActionContext`, `ObjectProvider`, `DBContext`, `ValueService`, `EventManager`, `RequesterType`, `PanelHeaderProps`, `G6Node`, `LemonTableColumn`, `SwaggerJson`, `CoapMethodName`, `ChartwerkTimeSerie`, `WaitingThreadInfo`, `BezierCoffs`, `NotExpression`, `vscode.CodeAction`, `AdjacencyGraph`, `Ability`, `SchemaRefContext`, `MapboxGeoJSONFeature`, `FixtureLoader`, `Tokenizer`, `RouterReq`, `AudioDescription`, `UpdateDashboardCommandInput`, `WebDriver`, `ItemService`, `INotificationTemplate`, `CurrencyFormatOptions`, `VersionedTextDocumentIdentifier`, `AzHttpClient`, `PlacementContext`, `Hash256String`, `It`, `RestManager`, `ExpressionValueSearchContext`, `IHeader`, `CreateAccountParams`, `ListChangeSetsCommandInput`, `DtlsRandom`, `BackupPolicy`, `CkbBurn`, `PageNode`, `TabComponentProps`, `Course`, `TreeNodeLocation`, `IExternalStorageManager`, `Adapter`, `DeleteResourcePolicyRequest`, `DetectorBuilder`, `zowe.IDownloadOptions`, `alt.IVector3`, `MapOf`, `LifecycleSettings`, `QuerySet`, `BaseListParams`, `ImageResult`, `EntityLike`, `PagerCell`, `SlpRefType`, `StaticRegion`, `SchemaAttributeType`, `FeatureUrl`, `Labor`, `Sponsor`, `DeleteProjectRequest`, `HyperlinkProps`, `UpdateStudioCommandInput`, `LegacyDrawing.Sprite`, `Stitches.ScaleValue`, `ComponentBed`, `OpCode`, `PreviewDataApp`, `ODataActionResource`, `InventoryFilter`, `MatchSpecific`, `IHttpFetchError`, `CompositePropertyDataFilterer`, `Shift.Node`, `ReviewerStatisticsState`, `d.FsReadOptions`, `requests.ListAlarmsRequest`, `DMMF.ModelAction`, `EditOptions`, `requests.ListVirtualCircuitsRequest`, `Spread`, 
`CreateProfileDto`, `ListAutoScalingConfigurationsRequest`, `IControllerAttribute`, `CreateDatasetCommandOutput`, `BitcoinjsNetwork`, `InputListProps`, `Elevation`, `ListCertificatesResponse`, `ModalComponent`, `BinaryNode`, `UseQueryReturn`, `TileMatrixSet`, `XRInputSource`, `MultiCommandCCCommandEncapsulation`, `ConnectionContracts.ConnectParams`, `AugmentedAssignmentNode`, `IClassify`, `InMemorySpanExporter`, `PostRequest`, `ListObjectsV2CommandInput`, `DataClassBehaviors`, `Lang`, `QueueItem`, `VcsRepository`, `EnhancedEmbeddableContext`, `NotificationData`, `XPCOM.nsIDOMWindow`, `ThunkAction`, `EditorStore`, `OptimizeCssInput`, `ProviderInfo`, `TableCell`, `FnArg`, `PageContent`, `Serializable.GraphSnapshot`, `ListDatasetsCommandOutput`, `WatcherFolderMap`, `TJSONObject`, `ProofreadRuleMatch`, `FunctionAppStack`, `OpIterator`, `d.PrintLine`, `OAuthRequest`, `ProductCategoryService`, `Trash`, `InternalContext`, `ICircuitGroup`, `ProjectDataManager`, `MaterialEntry`, `NamedImportsOrExports`, `OnExistingFileConflict`, `Test.TestLogType`, `GfxrPass`, `ApolloError`, `SvelteComponent`, `TaskBase`, `MoveTree`, `ethers.providers.Provider`, `DaffCategoryReducerState`, `Datepicker`, `SpatialStandupRoom`, `ComponentAst`, `Serverless`, `SM`, `DownloadTask`, `DetailedPeerCertificate`, `DevToolsExtensionContext`, `LowLevelResponse`, `ProofAttributeInfo`, `KeyResult`, `ComponentsState`, `UserDataPropertyAPI`, `QueueMap`, `UInt16`, `OsmConnection`, `HookResult`, `ThermostatFanModeCCReport`, `PlatformTypes`, `DeployedServicePackage`, `JobQueue`, `ActionTypes`, `XcodeProject`, `WebsocketData`, `CalibrationResponseAction`, `VisualizeEmbeddableFactoryDeps`, `Generate`, `EditDoc`, `ShapeConfig`, `Chance`, `TinyPg`, `ScannedBehavior`, `TweetEditorState`, `Blockchain`, `MappedTypeDescription`, `WatchDecorator`, `GuaribasUser`, `BroadcastService`, `DeploymentEnvironment`, `InstructionData`, `IMutableQuaternion`, `StatusUpdate`, `DiscordooError`, `ParticipantContents`, `JDesign`, `SymlinkCache`, `EthereumPaymentsUtilsConfig`, `OperationParameter`, `core.IRawOperationMessage`, `NextServer`, `IRenderer`, `GalleryProps`, `BooleanCV`, `GlobalCoordinates`, `Verifier`, `IBaseImageryMapConstructor`, `CollectionData`, `HsAddDataService`, `SavedObjectsBulkUpdateObject`, `MediaStreamConstraints`, `DatePipe`, `TIn`, `NavigationContainerRefWithCurrent`, `IntersectionTypeNode`, `ViewNode`, `Stripe`, `IWriteAbleSetCombination`, `BlockCache`, `OptionsStruct`, `MyCompanyRowConfig`, `ComparisonResult`, `AbstractRegisteredService`, `moneyMarket.oracle.PricesResponse`, `HTMLIonItemElement`, `IMutableCategorizedPropertyItem`, `Folded`, `NamespaceImport`, `DefaultContext`, `Cypress.PluginConfigOptions`, `SolutionBuilderHost`, `InputTimeRange`, `CancelExportTaskCommandInput`, `DialogDelegate`, `PlayerOptions`, `ParseTree`, `IProcFSEntry`, `Coverage`, `WorkingDirectoryStatus`, `InferredProject`, `StoneTypeArray`, `WorkItemTypeField`, `FsFiles`, `GroupName`, `GfxWrapMode`, `ObjectLiteralElement`, `IElement`, `TiledMap`, `ObjectTracker`, `common.RegionProvider`, `LoggableTarget`, `DriveFile`, `requests.ListDedicatedVmHostShapesRequest`, `HostInstanceMap`, `ProxyIntegrationTester`, `GetEndpointCommandInput`, `ReindexActions`, `CollectionMetadata`, `SyntaxTree`, `DocInstance`, `LineUpJSType`, `ListDataViewsCommandInput`, `TabsListProps`, `IPromiseRetry`, `CollectionEvent`, `CellTile`, `SampleAt`, `IEmployeeJobsStatisticsResponse`, `WalletConnect`, `FnN3`, `API`, `INodeTypeDescription`, `TranslateAnswerConfig`, `VisiterOption`, 
`TxEventContext`, `TestAnalyzer`, `IBeacon`, `ProjectUser`, `BuildHelpers`, `ManualServerConfig`, `IStartTsoParms`, `DNSPacket`, `GatewayShardsInfo`, `ResourcePermission`, `BasePlacement`, `ReconnectionOptions`, `FieldConfigData`, `ExploreStateModel`, `Services.Plugins`, `KV`, `SourceMapSource`, `Canonizer`, `TEntity`, `NmberArray16`, `StoreMap`, `JSON5Config`, `DAL.KEY_6`, `ColorScaleInfo`, `Command.Command`, `MediaFormat`, `SVGAttributes3D`, `ArrayDiff`, `PostData`, `TProductFilter`, `MultilevelSwitchCCStartLevelChange`, `SimpleFunction`, `AbbreviationAttribute`, `CreateServiceRequest`, `IntervalJobOptions`, `CSVOutput`, `WithMetadata`, `CipherContext`, `TokenLocation`, `EphemeralTaskLifecycle`, `CallContext`, `MonoRepo`, `StackDataValues`, `PiEditProjectionItem`, `TokenList`, `ClassInfo`, `QueueStorageContext`, `TestingAggregate`, `MetaLogger`, `ANGLE_instanced_arrays`, `RadixParticle`, `BaseProtocolLabware`, `PackageConfig`, `LightGallery`, `ImageType.StyleAttributes`, `OrderTemplatesOverviewPage`, `Radian`, `IAnswer`, `ValueTypes`, `EntityConfiguration`, `TemplateService`, `DeleteAccountsRequestMessage`, `Effects.SpriteEffect`, `VirtualDocument`, `FsFile`, `SVGNodeAttribute`, `DereferencedSchema`, `NonRelativeModuleNameResolutionCache`, `StreakItem`, `ViberTypes.MessageOptions`, `Water`, `ASTVisit`, `JobLifecycleState`, `Types.KafkaConsumerMessageInterface`, `SupCore.Data.EntryNode`, `ts.StringLiteral`, `AT`, `NgOpenApiGen`, `GlobalService`, `LookupResult`, `TransitionCheckState`, `ImportNamespace.Interface2`, `UndoState`, `SignalData`, `IconSvg`, `CardName`, `ListSchemasCommandInput`, `apid.DropLogFileId`, `StatusCodeCallback`, `ListComprehensionIterNode`, `Registrar`, `ObjectNodeParams`, `PatternClassNode`, `JobSummary`, `doctrine.Type`, `requests.ListNetworkSecurityGroupsRequest`, `ModelManager`, `AriaProps`, `IPluginSettings`, `immutable.Map`, `IVirtualDeviceValidatorResult`, `KintoResponse`, `PatternMatch`, `AttrValue`, `IAtom`, `DataSourceConfig`, `PoseNetOutputStride`, `ExtendedError`, `CountBadgeProps`, `ImportKeyPairCommandInput`, `Rate`, `GeoUnitsForLevel`, `ALSyntaxWriter`, `Joint`, `HostRuleHeader`, `ChaincodeResponse`, `IMinimatch`, `NonPayableTx`, `MutableGeoUnitCollection`, `Questions`, `OpenSearchResponse`, `FirstValue`, `AssembledObject`, `DefinitionLocation`, `Conv2D`, `IOrganizationSet`, `Type.TPowerQueryType`, `CardInGame`, `drawerModule.RadSideDrawer`, `ContentPage`, `ThirdPartyCapDescriptor`, `Event_2`, `SimpleTypeMemberNamed`, `SignatureHash`, `SavedObjectWithMetadata`, `ParsedStringPattern`, `SuggestChangeHandler`, `APIError`, `SortKeyParams`, `ButtonStyleProps`, `PaymentV2`, `EmailOptions`, `TypePackage`, `GaxiosError`, `IntelChannel`, `ManagementAgentPluginGroupBy`, `DangerDSLType`, `android.graphics.Bitmap`, `requests.ListDbSystemPatchHistoryEntriesRequest`, `NavLinkWrapper`, `Terminal`, `scriptfiles.ASModule`, `LoadingLastEvent`, `UpdateModelCommandInput`, `QuantumElement`, `PositionSide`, `TransformSchemaOptions`, `DateService`, `SlashCommandContext`, `StreamClient`, `GlobalToModuleMapping`, `CSharpProperty`, `NetworkVirtualAppliance`, `IUserFilterDefinition`, `TweenFunc`, `ListContentsCommandInput`, `PackageJsonDependency`, `BitBucketCloudAPI`, `SelectPopoverOption`, `SpyObj`, `ListChildComponentProps`, `LetterStyle`, `RGBColorType`, `AnyXModule`, `Wnd`, `JIntersection`, `FactoryUDFunction`, `WrappedLiteralValue`, `FileLock`, `DebouncedState`, `EnumType`, `ExecutionDriver`, `SrcDecoder`, `TableDistinctValue`, `KeywordDefinition`, `LookupInResult`, 
`Survey`, `ValidatedBatchConfig`, `ISceneLoaderAsyncResult`, `StackProperties`, `Types.EditableTitleState`, `HLTVPage`, `HistoryItem`, `AirPacker`, `HighlightItem`, `SIZE`, `EmbeddableFactory`, `DeleteFn`, `GfxBindingLayoutSamplerDescriptor`, `PSTFile`, `VisualizationsSetup`, `IProgress`, `DetailedOrganizationalUnit`, `PropFunctionSignature`, `vscode.CodeActionContext`, `ProposalData`, `Messages`, `BotCursorType`, `AudioTrack`, `Pump`, `pxtc.SymbolInfo`, `LinuxParameters`, `ITableSchema`, `DictMap`, `ComponentTemplateListItem`, `PluginHostProps`, `HammerInput`, `WatchCallback`, `AvatarProps`, `ZoomTransform`, `ICalculatePagingOutputs`, `Github`, `Secret`, `RNNLayerArgs`, `LocalProps`, `NumberLiteralExpr`, `GenericRequestHandler`, `AppResult`, `TasksActionTypes`, `ResponderExecutionStates`, `CustomResource`, `DocItem`, `Item`, `Object3D`, `RxSocketioSubjectConfig`, `UIHelper`, `EditTransformFlyoutState`, `ThyClickPositioner`, `PluginRemoteLoadZipOptions`, `HostItem`, `tr.actions.Args`, `IStackTokens`, `PreciseNumber`, `ParseLocation`, `ListCore`, `ValueGetterParams`, `ThyListOptionComponent`, `MapEnv`, `ParamSchema`, `BinaryBody`, `DescribeMaintenanceWindowExecutionTasksCommandInput`, `Stringer`, `CandidatesService`, `CommitSequence`, `Ora`, `CustomPluginOptions`, `ImportedModuleDescriptor`, `TGen`, `IRequestQueryParams`, `BatchChain`, `IRendererOptions`, `GitHubItemSubjectType`, `ITemplateItem`, `AlertSummary`, `ForwardingState`, `KeysState`, `RealTestServer`, `TextWrapper`, `interop.Reference`, `ScreenCollisions`, `DAO`, `GetPointFieldFn`, `requests.ListSubnetsRequest`, `IStreamInfo`, `GoogleFont`, `Animated.Adaptable`, `ResourceDoc`, `TrackedPromise`, `StoryGetter`, `CentralTemplateProvider`, `NativePointer`, `CompiledPath`, `IMdcTextFieldElement`, `EmitResolver`, `AsyncCache`, `OrganizationAccountConfig`, `VertexAttributeEnum`, `ThyAutocompleteConfig`, `NodesVersionCompatibility`, `ExplorationInfoParameter`, `MindMapModel`, `SummaryPart`, `BarGroupValue`, `GitRepository`, `PropertyGroup`, `DataRequest`, `CustomRender`, `GetDocumentCommandInput`, `IMatrixEvent`, `ParsedCronItem`, `PartialList`, `ToolsSymbolInformationRequest`, `Exception_Type`, `IGetTasksStatistics`, `CmsModelFieldValidation`, `NgForm`, `LocationFeature`, `CodeMirror.EditorConfiguration`, `StrategyOptions`, `UncommittedChangesStrategy`, `AbstractModelApplication`, `WebSocket.Server`, `TableModel`, `DerivationPath`, `BuildSupport`, `S3.PutObjectRequest`, `ListResolverEndpointsRequest`, `CsvParserStream`, `MappedDataRow`, `JassPlayer`, `IActionContext`, `ComponentSlotStyle`, `vec2`, `VisualizePlugin`, `NcTab`, `XTransferNode`, `ILiquidationCandidate`, `ITaskRepository`, `CreateProjectCommand`, `worker.IWorkerContext`, `GraphContract`, `DropDownProps`, `ElementFound`, `forge.pkcs12.Pkcs12Pfx`, `Persona`, `Meta`, `ImportType`, `SearchCondition`, `CoerceFunc`, `EntityUID`, `CanvasElementModel`, `TestResults`, `IdentifierDecorator`, `StudioState`, `ScreenService`, `TResolvedResponse`, `AddUpdatesEvent`, `ControlActivity`, `DownloadResponse`, `InstallVirtualAppParams`, `WebampWindow`, `BlocksInfo`, `ListGroupsCommandInput`, `HistoryEntry`, `IDownload`, `InvalidateOptions`, `Rendered`, `RolesFunc`, `AbiInput`, `NgControl`, `Endomorphism`, `BaseMaterial`, `TypescriptEditorPane`, `IPersonalizationSurveyAnswers`, `RegExpLiteral`, `MediaRequest`, `MonsterArenaStats`, `NumberExpression`, `Connections`, `DatabasePoolConnection`, `LibraryStoreItem`, `EventToPrevent`, `URL_`, `AstModuleExportInfo`, `Nth`, `HttpHeader`, `ReportingStore`, 
`RetryData`, `GetConfigurationSetEventDestinationsCommandInput`, `SplineRouter`, `WebService`, `QTransition`, `ChaCha20Poly1305`, `PeerConfig`, `SyncValidation`, `TensorBuffer3D`, `MaybeAsync`, `FunctionService`, `AppsCommands`, `NumberInputProps`, `CountState`, `AutoRestGenerateResult`, `PlotLineOrBand`, `JSONResponse`, `DateTimeParts`, `TxBuilder`, `LibrariesBuilder`, `DBSchema`, `PathDescription`, `OverlayEventDetail`, `ProjectLanguage`, `X12SerializationOptions`, `ValueClickActionContext`, `IPeacockElementAdjustments`, `CheckboxProps`, `BuildWatchEmitter`, `DayModifiers`, `ISavedObjectTypeRegistry`, `PrimAction`, `Question`, `SagaEnv`, `DataToGPUOptions`, `PositioningPlacement`, `StorageEntry`, `StoreOrStoreId`, `GfxRenderCache`, `SymbolDisplayPartKind`, `ForegroundContexts`, `DifferentHandlerParam`, `S3Action`, `BitcoinUnsignedTransaction`, `ControllerSessionScope`, `ClientEngineType.Library`, `Scripts`, `StagePanelDef`, `TransactionInstruction`, `SuccessfulResponse`, `ParserTreeValue`, `ExpoConfigFacebook`, `OPCUAClient`, `ISwaggerizedRouter`, `TMessage`, `SetupDeps`, `JsonExpr`, `Sentence`, `ListPackagesCommandInput`, `ast.FunNode`, `PsbtTxInput`, `PortalCommunicator`, `DatasetLocation`, `Contents.IModel`, `ExpressionFunctionVarSet`, `LocalForageWithObservablePrivateProps`, `ChildField`, `FormComponent`, `IUpSetDump`, `StagePanelType`, `UseSelectProps`, `BlobStorageContext`, `MockObject`, `VpnSite`, `requests.ListSuppressionsRequest`, `WebAppCreateStack`, `UserResolvable`, `CollectedData`, `TestElementRefersToElements`, `IncrementalElement`, `IOption`, `ApmBase`, `ReactiveCommand`, `MerchantOrderEntity`, `WeaveResult`, `JhiDataUtils`, `KeyAttribute`, `LayoutVisualizationGroup`, `RTCRtpReceiveParameters`, `CollectionCompilerMeta`, `Filesystem.ReadJsonAsync`, `PageOptions`, `SiteListItem`, `ActivityState`, `AuthorizationContext`, `crypto.Hash`, `ReadableOptions`, `IComboBoxOption`, `Swatch`, `ListValue`, `IMenuProps`, `EdgeCalculatorDirections`, `Keypair`, `StylusState`, `GeometryProvider`, `PartitionHash`, `InventoryPlug`, `ContextView`, `CopyButtonProps`, `GSConfiguration`, `knex.Raw`, `ClipVector`, `GrowableXYArray`, `FileSystemCallback`, `StaticTheme`, `NestedRecord`, `pulumi.CustomResourceOptions`, `Strategy`, `LogAnalyticsMetaFunctionArgument`, `IProseMirrorNode`, `PresetInfo`, `ExtractRouteParams`, `CesiumEvent`, `RawModule`, `requests.CancelWorkRequestRequest`, `FileMatcherPatterns`, `DefinitionInfo`, `Parser.Tree`, `SimpleASTSeq`, `WrappedCodeBlock`, `INotificationDocument`, `MapperOptions`, `FileUri`, `Report`, `StackRootNavContext`, `ArticleFormat`, `ScryptedDeviceType`, `Initialization`, `Authenticator`, `MergeTree.Marker`, `EntityCollectionRecord`, `GRU`, `NgSelectComponent`, `EmulatorContext`, `AnyOf`, `IExportOptions`, `NodeStructure`, `CreateNoticeDto`, `StyleConfig`, `interfaces.Request`, `RegionTag`, `HeadersJson`, `TimeOffsetInterval`, `WindowWrapper`, `EditorGroup`, `TemplateTransformerData`, `TTheme`, `ThrowAsyncIterable`, `EnumStringHelper`, `FormatTraits`, `IWebhookRequest`, `CapabilitiesResponseWrapper`, `EngineDefaults`, `ParticipantsAddedEvent`, `Res`, `TEName`, `MsgCreateLease`, `AlignmentFactory`, `ContainerFlags`, `BigQueryRequest`, `DebugSessionOptions`, `DynamicAttrs`, `AgentConfig`, `ElementEventCallback`, `RequestProvider`, `IntrospectionInputObjectType`, `MappingTreeItem`, `MeshFragData`, `IteratorContainer`, `EnhancedGitHubIssueOrPullRequest`, `ScopedLogger`, `CardRenderItem`, `BuildContext`, `GroupOrOption`, `CharRangeSection`, 
`OpenSearchDashboardsServices`, `SearchIssuesAndPullRequestsResponseItemsItem`, `Type_Enum`, `ControllerStateAndHelpers`, `SpeechConfigImpl`, `MemberDef`, `CodeMirror.EditorChange`, `StorageValuesV7`, `ICalendarEventBase`, `DataCollection`, `Cmp`, `SQLFragment`, `MonikerData`, `SaveType`, `IbkrEvents`, `MobileService`, `GetRowHeightFn`, `NugetPackageTableFields`, `PartitionSpec`, `MatBottomSheetContainer`, `CheerioFile`, `ContractRegister`, `Backoff`, `InHostPacket`, `SwiftProperty`, `BaseModel`, `MediaModule`, `ValueFn`, `ValidatorData`, `PlotSpec`, `requests.ListClusterNetworkInstancesRequest`, `commonServices`, `BaseLogger`, `SimpleRect`, `ISelectorMap`, `nsIDOMNode`, `CommandInfo`, `PixelRendr`, `TwistyPropDebugger`, `GetEmailIdentityCommandInput`, `strtok3.ITokenizer`, `RemirrorJSON`, `TestOperation`, `Departement`, `CodeGenerator.Params`, `IExcerptTokenRange`, `SelectorParser.Node`, `AxiosRequestConfig`, `azureTable.Table`, `Highcharts.AnnotationEventObject`, `CoverageOptions`, `JackettFormattedResult`, `PlatformLocation`, `RouteExecutionFromInput`, `ResultAccumulator`, `PropType`, `PoiInfo`, `Handlers`, `MDCRippleAdapter`, `DeserializationOption`, `NavNodeManagerService`, `ControlPanelSectionConfig`, `IGlobalState`, `TreemapSeries.NodeValuesObject`, `CommittedFileChange`, `ElementCoreContext`, `FileAnnotationType`, `TimeBuckets`, `SelectablePath`, `OES_vertex_array_object`, `RequestMethod`, `NetworkOptions`, `SerializedCard`, `GitStashReference`, `HemisphericLight`, `LexoDecimal`, `AuthenticateCustomRequest`, `LogicalKeyboardKey`, `VLC`, `PossiblyAsyncHierarchyIterable`, `ReadonlyPartialJSONObject`, `GX.Register`, `FaceInfo`, `DefaultEmitOptions`, `ISignature`, `PointObject`, `UpdateQueryNode`, `AggParamsState`, `DataStoreTxEventData`, `Highcharts.AnnotationChart`, `RequestContext`, `RawResponse`, `CrawlerRunOptions`, `PartyData`, `TalentMaterial`, `SipgateIOClient`, `IBaseRequest`, `EcsMetricChange`, `ElementAttrs`, `FileSearchCriteria`, `TransactionAsset`, `BufferStatusResult`, `TextmateSnippet`, `SpawnPromise`, `WithReservedWord`, `ISpriteMeta`, `Weekday`, `PickerComponent`, `ImportObject`, `BtnProps`, `EntityEvictEvent`, `ResponseModel`, `ILendingPool`, `Call_SendResultsTo`, `ISurveyObjectEditorOptions`, `ConfigObject`, `ObjectLike`, `DataClassEntry`, `AdShowOptions`, `Associative`, `RiskElementModel`, `GadgetInstanceService`, `defaultFontWeights`, `ExtHostCommentThread`, `WeekDayIndex`, `Gaxios`, `DescribeJobExecutionCommandInput`, `IGameEditorContext`, `DecoratorObject`, `MagitBranch`, `CurrencyAmount`, `EmployeesService`, `IMinemeldStatusNode`, `ReactApollo.OperationOption`, `History`, `SteeringPolicyPriorityAnswerData`, `mdast.Root`, `NowFile`, `kms.KmsManagementClient`, `ContainerModule`, `TestConsumer`, `ImportSpecifier`, `StatedBeanMeta`, `DunderAllInfo`, `ExtractScript`, `HighlightedType`, `Utility`, `Talk`, `Forest`, `ParsedSystem`, `DamageTypeData`, `Policy`, `android.bluetooth.BluetoothGatt`, `RoomPartialState`, `FileStore`, `CalibrationLabware`, `FunctionKey`, `DomainDeprecationDetails`, `NamedNodeMap`, `Namer`, `ExportDataType`, `SearchResultPage`, `JQueryPromise`, `TContent`, `SwitchContainerProps`, `DColorButton`, `CustomerData`, `PullToRefresh`, `MockDynamicContent`, `WeakStorage`, `AreaSeriesStyle`, `JSX.TargetedMouseEvent`, `SingleSelectionSet`, `CartProduct`, `PlayerStateService`, `DecompilerEnv`, `p5ex.CleanableSpriteArray`, `MicrosoftSqlServersResources`, `NodeBase`, `LineCounter`, `PartOfSpeech`, `IPostMessage`, `IVueAuthOptions`, 
`BindingEditorContextInfo`, `UserListQueryDto`, `VirtualDeviceScript`, `TranslaterPoint`, `ConflictState`, `AnnotationConstructor`, `ECompareValueType`, `IRowDetails`, `IGistMeta`, `AppSettingsService`, `Solution`, `IStorageWrapper`, `DOMRect`, `SpeedKmH`, `TokensService`, `AuthorizationRequest`, `EmotionCanvasTheme`, `DatabaseConfig`, `DataProxy`, `ActualT`, `TransactionState`, `MatButton`, `Uploader`, `PlayerActions`, `RunOutput`, `QueryArgs`, `WebGLTexture`, `LooseValidator`, `ESLintExtendedProgram`, `m.VnodeDOM`, `LifecyclePeer`, `UIPickerView`, `NuclearMeta`, `TraceEvent`, `CoverageFragment`, `ColorStyleProps`, `PasswordBasedPreset`, `CalcIdxType`, `AdadeltaOptimizer`, `ProfileOrganizationResponse`, `Trace`, `ConnectionFormService`, `WhereOptions`, `IChildNodesMap`, `ReactionType`, `BrokenConeSide`, `TestInvokeAuthorizerCommandInput`, `GraphqlConfig`, `NamedObjectDef`, `FFMpegInput`, `StrapiAttribute`, `DefaultResourceOptions`, `ConfigEnv`, `RollupTransition`, `DeleteConnectionCommandInput`, `KeyToDiffChangeMap`, `StorageProxy`, `TestingEntity`, `TenantService`, `RunEveryFn`, `Rtcp`, `TaskConfiguration`, `CornerFamily`, `JsonPatchOperationPathCombiner`, `MealForm`, `CompilerBuffer`, `ProvisionByoipCidrCommandInput`, `ts.MethodDeclaration`, `HttpTerminator`, `ResolveProvider`, `Actual`, `IBasicSession`, `N`, `AutoArchiveSettingsDelegate`, `BaseConvertService`, `ParseContext`, `IWorkflowDataProxyAdditionalKeys`, `RelayerRequestSignatureValidator`, `AndroidInput`, `DashboardTableData`, `OperationElement`, `ITransactionOption`, `SimulationOptions`, `IgAppModule`, `BorderStyle`, `GeoContainmentAlertParams`, `UriComponents`, `Evaluator`, `TNSDOMMatrix`, `RouterInfo`, `PatternOutput`, `ConnectorDeclaration`, `Role`, `JobExecutionSummary`, `SubscriptionsClientOptions`, `HttpMethods`, `VimValue`, `Accept`, `ImportResult`, `NglInternalDate`, `AnyParameterTypeDescriptor`, `IAssociationParams`, `CompilerEventDirAdd`, `EntityCacheQuerySet`, `AboutService`, `OnOptions`, `MigrationSummary`, `SavedVisInstance`, `ScenarioCheckResult`, `Fold`, `DataAction`, `DefinitionRowForInsert`, `requests.ListSecurityListsRequest`, `NotSkeleton`, `DynamoToPromise`, `AsObject`, `reflect.Assembly`, `WebCryptoPartialPbkdf2`, `Return`, `FileIncludeReason`, `AndroidActivityResultEventData`, `StubbedInstanceWithSinonAccessor`, `DiffView`, `TreeNode`, `GradientColor`, `NodeLink`, `AWS.AWSError`, `LightGroup`, `Firestore`, `DaffCountryFactory`, `IDragData`, `CallControlOptions`, `PrintOptions`, `DictionaryQueryEntry`, `WorkspaceField`, `PeerSetup`, `ProtoServer`, `MediaKey`, `PutEventsCommandInput`, `ActiveEnvironment`, `THREE.Matrix4`, `StepOptions`, `Thumbnail`, `BufferTokenizer`, `ExpressionAttributeNames`, `ContentBuilder`, `ExtraCommandLineOptions`, `SavedObjectsCollectMultiNamespaceReferencesOptions`, `FillLabelConfig`, `PaneWidget`, `PartialQueryLang`, `Pizza`, `ListFilesResult`, `TsConfig`, `XYPoint`, `SelectBase`, `None`, `PiEditProjectionLine`, `SubEntityData`, `WorkspaceType`, `DMMF.Datamodel`, `FileStructureType`, `TerminalApiRequest`, `LoggerOptions`, `IActionCodeSettings`, `ResizerKeyDownEvent`, `SegmentType`, `OpUnitType`, `FullCertificationRequestDTO`, `FeeExtensionsConfig`, `PluginRegistration`, `V1Node`, `UhkModule`, `KeyEventLike`, `StoreDestinationArray`, `TokenDetailsWithCoingeckoId`, `ListImagesCommandInput`, `IAaveGovernanceV2`, `CreateAskDTO`, `AreaField`, `StoryArgs`, `Relations`, `ForgotPasswordAccountsRequestMessage`, `InputValue`, `LocatedError`, `Boundaries`, `MessageRenderer`, `RnM2`, 
`DependencyType`, `OP_PUSHDATA`, `ITriggerResultObject`, `CANNON.Body`, `DocgeniLibrary`, `ParseSchemaTypeInfo`, `ExpressResponse`, `ResInfo`, `TsRadioOption`, `DisputableVotingData`, `CompletionBatch`, `OasPathItem`, `WrappableType`, `ActionSpec`, `GetRepositoryPolicyCommandInput`, `PatternInfo`, `Utils.ITimeProvider`, `IdDTO`, `GeneratedIdentifier`, `vscode.MessageOptions`, `ConvexClipPlaneSet`, `array`, `EnrichedLendingObligation`, `SecureStorage`, `NoteItemDummy`, `AppStatus`, `SingleSegmentArena`, `BenefitMeasurementIndicator`, `StacksTestnet`, `Specie`, `RadarChart`, `DefaultComponent`, `ARMRomItem`, `CallSettings`, `ApplicationShell`, `EventListenerHandle`, `FeeMarketEIP1559Transaction`, `IPopoverProps`, `ThemeColor`, `EncryptContentOptions`, `AppointmentMoment`, `WorkspaceServiceInstance`, `angular.ui.IStateService`, `google.maps.DirectionsResult`, `NotebookCellOutputItem`, `firebase.firestore.Firestore`, `AppActions`, `Flag`, `CommitID`, `InjectableType`, `CullMode`, `StreamPresenceEvent`, `AccountApple_VarsEntry`, `FastifyReply`, `QueueInfo`, `PlaybackParticipant`, `FlexPluginArguments`, `ReadonlyPartialJSONValue`, `RemoteRepositoryRepository`, `TxnJsonObject`, `NetworkErrorType`, `TestReference`, `IndexPatternsContract`, `App.contentSide.ICommunicationToBackground`, `StyleDefinition`, `YDomainRange`, `MediaTrackCapabilities`, `MetricDataQuery`, `Switchpoint`, `CoreWeaponMode`, `IMeshPrimitive`, `IInsert`, `BatchNormalization`, `TestCase`, `ListFindingsCommandInput`, `DefaultMap`, `ButteryFile`, `WorkspaceSymbolCallback`, `request.Test`, `Text`, `BzlConfiguration`, `ReducerManager`, `UIFunctionBinding`, `DecoratorArg`, `ExtensionNodeAttrs`, `UpdateConfigurationCommandInput`, `DiagramMakerNode`, `Vuex.Store`, `nameidata`, `ApiContract`, `EntryControlCCConfigurationReport`, `ErrorDataKind`, `ProposalIdOption`, `CustomStyle`, `MovementItem`, `IEmployeeCreateInput`, `HttpResponseObject`, `TSTopLevelDeclare`, `ControlledComponentWrapperProps`, `PacketType`, `AudioConfigImpl`, `MessageCode`, `XmlEmptyBlobsCommandInput`, `IManagementApiClient`, `QueryLeaseRequest`, `PluginInsertAction`, `BabelPresetChain`, `StackSummary`, `IConfigFile`, `WebGPUBackend`, `GfxPrimitiveTopology`, `ICurrentUserState`, `ILocation`, `tf.Scalar`, `UVFile`, `TwistAction`, `ComponentTest`, `IOsdUrlStateStorage`, `MUser`, `ParsedIcons`, `ReplicationConfigurationReplicatedDisk`, `PluginOptions`, `ItemsService`, `WorkflowItemDataService`, `CountQueryBuilder`, `Time`, `TestCommand`, `DeleteServerCommandInput`, `ForwardingSpec`, `IActorContext`, `LambdaMetricChange`, `CssProperty`, `CookieJar`, `InviteMembersCommandInput`, `PasswordHistoryData`, `Dino`, `TextElementState`, `BuildInfo`, `ListFiltersCommandInput`, `BasicGraph`, `ServiceRoute`, `UIStorage`, `Initiator`, `http.ClientRequest`, `ISubnet`, `InternalServiceException`, `ISPRequest`, `GX.TexCoordID`, `DFA`, `StatementBodyContext`, `d.PropOptions`, `Curry2`, `NodeRequire`, `ShallowRef`, `WorkflowNode`, `PackageMetadata`, `LoopTemplate`, `Benchee.Options`, `Grid`, `IQuestionnaire`, `IFluidDataStoreChannel`, `Clause`, `MatchmakerAdd_NumericPropertiesEntry`, `StatCalculated`, `SwPush`, `UIWindow`, `SearchInWorkspaceOptions`, `Sprite3D`, `HID`, `WorkUnit`, `RouterUrlState`, `Bill`, `fse.Stats`, `KernelMessage.IExecuteRequest`, `IPossibleParameterFile`, `AssociationCCAPI`, `CustomEventInit`, `PointAndCrossingsList`, `ListWebACLsCommandInput`, `CSSScalar`, `RootTestSuite`, `GfxTextureP_WebGPU`, `SphereCollisionShape`, `IRenderMimeRegistry`, `ProxyTarget`, 
`AttributeInput`, `IBindingSetting`, `ObservableObjectAdministration`, `MIRFieldDecl`, `Attach`, `SnapshotFragment`, `AutoScalingGroup`, `AstIdGetter`, `OperationNode`, `JWTSignOptions`, `UnionTypeProvider`, `AuthClient`, `t.TSType`, `PropOptions`, `RoomState`, `TraversalStrategy`, `Coding`, `ExpNum`, `IMessageParser`, `JSNode`, `NodeTransform`, `IndexableNativeElement`, `ProposedPosition`, `ReprOptions`, `StreamedData`, `SelectorAstNode`, `AuthMachineContext`, `IHttpRequest`, `DataTypeContext`, `MutationFunctionOptions`, `TransitionConfig`, `ZRenderType`, `ExecuteResultLine`, `ToastInput`, `Net.Socket`, `IssuePublicationIdentifier`, `Producer`, `ServiceDependency`, `MessageCreateOptions`, `OpenSearchConfig`, `LoginProps`, `PbEditorElement`, `EditId`, `Job`, `GanttItem`, `displayCtrl.ICtrl`, `RichText`, `ConcreteTestSettings`, `PluginItem`, `MerkleTreeInclusionProof`, `VoidExpression`, `NearestIntersection`, `MatchingDirection`, `Broadcaster`, `IPrintableApplication`, `DeepLinkConfig`, `Tracer`, `SortKey`, `SwitchProps`, `FileSystemAccess`, `ast.NodeType`, `TagsListItem`, `OpenAPI.Parameter`, `Hand`, `ExtractorChecker`, `MockControllerAdapter`, `MovementState`, `Delivery`, `ESMap`, `PointerUpdateTrigger`, `GeometryRenderer`, `Fig.Subcommand`, `Inbound`, `SimpleTreeDataProviderHierarchy`, `Applicative3`, `SFSchema`, `CollectionWithId`, `LoginRequest`, `TextureData`, `TxsTopicData`, `DateRangeValues`, `TranslatableService`, `VertexPlaceholder`, `FunnelStep`, `ComponentRegistrant`, `GroupFrame`, `PagingMeta`, `OnPostAuthToolkit`, `GeometryStateStyle`, `INodeInputSlot`, `CdtEdge`, `Emoji`, `ConeTwistConstraint`, `RelationField`, `MarkerNode`, `SwipeGestureEventData`, `MatchScreenshotOptions`, `DescribeAssetCommandInput`, `ImageProps`, `ItemProps`, `MatchDSL`, `ProcessOutput`, `MarkdownEngineConfig`, `Bonus`, `StorageItem`, `IGraphQlSchemaContext`, `ImageSourceType`, `MetaRewritePolicy`, `ICollectionOptions`, `StyledLinkProps`, `ControllerMethods`, `TestClientLogger`, `FlexibleConnectedPositionStrategy`, `ServiceItem`, `d.EventOptions`, `BreadcrumbLinkProps`, `EditablePolyline`, `SenderDocument`, `Contribution`, `AppRoot`, `IPropertyOption`, `CoordinateExtent`, `IAccountDetails`, `CollisionKeeperCategory`, `PackageModuleRef`, `DaffCategoryFilterRangeNumericRequest`, `CombinedPredicate`, `Translate`, `ClientWrapper`, `ResourceList`, `SrtpContext`, `SpecificEventListener`, `CsvWriter`, `ConfigOption`, `ProjectItemImpl`, `ThemeInfo`, `Locations`, `CppBytes`, `IBehavior`, `logging.Log`, `CommerceLayerClient`, `TodosState`, `ListenerRemoveCallback`, `TImportOptions`, `IComponentEvent`, `CustomFontEmbedder`, `FieldQueryBase`, `DaffCategoryFilterRangeNumericPairFactory`, `OptionPureElement`, `EidasResponseAttributes`, `MlLicense`, `WebResponse`, `WindowProps`, `CommentController`, `EmusakEmulatorConfig`, `FeeStructure`, `ts.NumericLiteral`, `EdmxReturnType`, `EitherNodeParams`, `DroppableStateSnapshot`, `MockCallAgent`, `EthereumPaymentsUtils`, `Traversal`, `d3Geo.GeoProjection`, `ICodeBuilder`, `StylableFile`, `MockSerialPort`, `HippodromeEditOptions`, `ExecutionContextContainer`, `DaffExternallyResolvableUrl`, `PartyClose`, `UploadLayerPartCommandInput`, `FormattingOptions`, `UpdateFunctionCommandInput`, `TorusPipe`, `MediasoupPeer`, `BackupService`, `InputComponents`, `Epsg`, `SymbolAndExponent`, `ProDOSVolume`, `PolyIDAndShares`, `PullBlock`, `PluginResult`, `AutonomousDatabaseKeyHistoryEntry`, `DiscoverStartPlugins`, `UserTask`, `DashboardViewportProps`, `EnumMember`, `mixedInstance`, 
`ListPoliciesRequest`, `HighRollerAction`, `SubscriptionList`, `EmptyAction`, `LinkState`, `MCommentOwnerVideo`, `ProjectRepository`, `DrawOptions`, `BuildableTree`, `PersistentState`, `Nodelist`, `BodyTempState`, `HttpRequestWithGreedyLabelInPathCommandInput`, `ExtractorData`, `ItemField`, `CreateDatasetRequest`, `PolygonBoxObject`, `EmitEvent`, `TestDatum`, `PReLU`, `AccountBalance`, `GetMembersCommandInput`, `requests.ListCloudVmClusterUpdatesRequest`, `ProviderConfiguration`, `KBarState`, `vType`, `Groups`, `RRI`, `ResponsiveValue`, `Pairing`, `IMod`, `CalendarWrapper`, `RateLimitArguments`, `SlashingProtection`, `NodeDisplay`, `MpUIConfig`, `boolean`, `FunctionStats`, `DimensionGroup3D`, `Redlock`, `RenderBuff`, `PitchName`, `HTMLOptionElement`, `SyncNotifyModel`, `Ng2StateDeclaration`, `TransactionPool`, `UserError`, `YEvent`, `PageProps`, `MockElement`, `HashLockTransferAppState`, `OrientedBounds`, `AnyData`, `BidirectionalLayerArgs`, `MomentValidator`, `ObjectTypeKind`, `IResources`, `MDCMultilineTextField`, `IDataMessage`, `PinLike`, `RuleTypeModel`, `ServerResponseService`, `ISeedPhraseFormat`, `RuleSpec`, `IExecutionResult`, `DataBeforeRequestOptions`, `CollectorSet`, `NativeDeleteOptions`, `TempDirectory`, `CreateAttendeeCommandInput`, `Edges`, `TabModel`, `DescribeEndpointCommandInput`, `ProjectToApiAnalysis`, `SchemaQuery`, `DiscussionDocument`, `PaginationCallback`, `Pipeline`, `SceneControllerConfigurationCCReport`, `DataTypeFactory`, `VnodeDOM`, `CipherRequest`, `SceneColorTheme`, `SeriesTypePlotOptions`, `TestDecorator`, `UIModeType`, `ScopedLogging`, `PartialEmotionCanvasTheme`, `ActivationIdentifier`, `BlockPointer`, `BodyPartDefinition`, `ViewConfig`, `QueryBidsRequest`, `GfxRenderPipelineP_GL`, `GeometryObject`, `ISavedSearch`, `tfc.NamedTensorMap`, `RecordOfType`, `ExceptNode`, `MatGridTile`, `TimeUnit`, `WsBreadcrumbsService`, `VscodeSetting`, `CodeEditor.IPosition`, `RouteResponse`, `DaffCartPaymentMethod`, `OauthRequest`, `FSEntry`, `StepExtended`, `BastionHost`, `IUserData`, `RouteService`, `SalesInvoice`, `RefSet`, `Station`, `ClientPayload`, `ItemKeyboardNavigator`, `AstNodeWithLanguage`, `MActor`, `IAuth`, `ITestCase`, `WpResourceConfig`, `CogStacJob`, `BufferLike`, `VariationInfo`, `ObOrPromiseResult`, `WebSiteManagementModels.User`, `MockRequestInfo`, `Hapi.Server`, `GetQueryResultsCommandInput`, `CredentialsService`, `EndpointWithHostLabelOperationCommandInput`, `ImportAsNode`, `AuthenticateSteamRequest`, `DedicatedHost`, `ResolvedModuleWithFailedLookupLocations`, `HTMLPropsWithRefCallback`, `RouterConfiguration`, `OperationError`, `ConnectionResult`, `ParsedInterval`, `SavedObjectOpenSearchDashboardsServices`, `css.Node`, `Colors`, `messages.Location`, `RawResult`, `Oazapfts.RequestOpts`, `ImageHandlerEvent`, `AbiOutput`, `ResolveStylesOptions`, `RulesObject`, `CreateRepositoryResponse`, `PN`, `ChannelContract`, `Tracing`, `ServerMessage`, `DenoExtensionContext`, `LinkedHashSet`, `AlertInput`, `IAudioStreamNode`, `RNG`, `DragTarget`, `MicrosoftStorSimpleManagersResources`, `ResponseCallback`, `PrimaryKey`, `PatternMappingKeyEntryNode`, `RefreshTokenService`, `NgActionBar`, `BuildifierFileType`, `SlotOp`, `requests.ListMetastoresRequest`, `ArrayValues`, `TrackBuilder`, `PaginationDTO`, `SystemService`, `DID`, `IClassicmenuRuleSpec`, `MDSPostgresClient`, `ConnectArgs`, `ListUsersResponse`, `ConvLSTM2DCell`, `VaryMap`, `MdcCheckbox`, `RouteMethod`, `NpmInfo`, `TraceConstraint`, `SettingsDataUpdate`, `NotifyMessageDetailsType`, `KonvaEventObject`, 
`BlockchainTreeItem`, `android.graphics.drawable.Drawable`, `BinaryBitmap`, `FieldJSON`, `DeleteDomainCommand`, `ScalarTypeSpec`, `InfoActor`, `ExtraArgs`, `FileSystemUpdater`, `ESRuleConfig`, `UmiPluginNProgressConfig`, `Found`, `VcalVeventComponent`, `TestingFacade`, `AnimeListStatusFields`, `FullCalendar.ViewObject`, `kuberesources.ResourceKind`, `LookupItem`, `SubnetGroup`, `SentryScopeAdapter`, `PermuteLayerArgs`, `DeploymentConfig`, `DataMessage`, `NoteSnippetEditorConfig`, `BindingState`, `BaseClient`, `FileSystemEntries`, `PutConfigurationSetTrackingOptionsCommandInput`, `ThemeTool`, `SubscribeActions`, `TypeOrTypeArray`, `BasicCalculator`, `ImportStatements`, `ImageryMapExtent`, `CardManager`, `EventListenerCallback`, `FileHolder`, `ISequencedDocumentAugmentedMessage`, `PluginOrPackage`, `SerializedVis`, `UniqueOptions`, `IAttachMessage`, `convict.Schema`, `TimeQueryData`, `ListChannelModeratorsCommandInput`, `AuthenticationTemplate`, `StatefulChatClientArgs`, `VtexHttpClient`, `Http3FrameParser`, `CountryState`, `NzConfigKey`, `EntryObj`, `InnerAudioContext`, `SslConfig`, `IPlayerState`, `SCNMaterial`, `LobbyMember`, `ServiceProto`, `BackupShortTermRetentionPolicy`, `Sizes`, `requests.ListRoutingPoliciesRequest`, `Fzf`, `RemoveTagsFromResourceMessage`, `ListItemProps`, `Visitor`, `DataGridRow`, `CardProps`, `ListModelDeploymentShapesRequest`, `OmvFeatureFilterDescriptionBuilder.FeatureOption`, `SourceComponent`, `PublicEndpointDetails`, `AbsolutePosition`, `CheckPrivilegesResponse`, `LinterOffense`, `AutoforwardConfig`, `JoinTree`, `CHAINS`, `IRouter`, `CircularList`, `FnModules`, `IControl`, `ContactPayload`, `OpticsDomain`, `CustomWindow`, `ChangeDatabaseParameterDetails`, `RecordEdge`, `ModalType`, `PersistConfig`, `ResponderType`, `ExtractOptions`, `CsmSlotEntity`, `SMTFunctionUninterpreted`, `GeneralEventListener`, `ConchQuaternion`, `IssuanceAttestationsModel`, `CloudAccounts`, `FileListProps`, `PutBucketTaggingCommandInput`, `Vpc`, `IZoweJobTreeNode`, `EmployeeRecurringExpenseService`, `WaveformRegion`, `VerticalTextAlignment`, `Ident`, `DaffCategoryPageMetadata`, `PrimitiveField`, `PgType`, `RunnerGroup`, `ProofStatus`, `WechatTypes.SendMessageOptions`, `DeleteKeyPairCommandInput`, `Assembler`, `SPClientTemplates.RenderContext`, `Formatter`, `IndexProps`, `SentryEvent`, `StreamZip`, `TimeLog`, `SqlFile`, `Notifications`, `RunnerInfo`, `MongoMemoryServer`, `ConditionalTransactionCommitmentJSON`, `IMainMenu`, `RTCRtpCodecParameters`, `glob.Options`, `BSPFile`, `Iprops`, `HandlerResourceData`, `monaco.editor.IMarkerData`, `CliConfig`, `IKeyValuePair`, `FirmwareWriterPhaseListener`, `Glossary`, `SocketUser`, `IOpenSearchDashboardsSearchRequest`, `FetchFn`, `RootCID`, `WordGroup`, `ASTParserTree`, `TokenStreamRewriter`, `DeferredAction`, `ResourceDayGridWrapper`, `CreateCollectionOptions`, `PlanetApplicationService`, `TeamsActionConnector`, `PubKeyEncoding`, `NodeBuilderContext`, `ModuleKind`, `DataViewCategoryColumn`, `InstantRun`, `AEADCipher`, `NgxsDataStoragePlugin`, `EthereumAddress`, `ConditionalType`, `TrackDetails`, `CreateCustomVerificationEmailTemplateCommandInput`, `EmployeeLevelService`, `MovieService`, `PlannerConfigurationScope`, `T3`, `Weapon`, `WriteTournamentRecordRequest_TournamentRecordWrite`, `Login`, `MediaElementAudioSourceNode`, `CMDL`, `MagickGeometry`, `ResourceDifference`, `GDevice`, `ExploreBundleResult`, `ClassMember`, `InteractionModel`, `ThunkCreator`, `Json2Ts`, `DebouncedFunc`, `HandleProps`, `ts.BlockLike`, `RequestUser`, 
`UseSubscriptionReturn`, `CueSet`, `ProviderState`, `ComponentRegistry`, `JsxOpeningFragment`, `MockContext`, `WyvernAsset`, `InlineResolved`, `ng.ILogService`, `StationModel`, `DaffConfigurableProductVariant`, `DataViewHierarchyNode`, `AnimationDesc`, `RefedMixin`, `PerspectiveGetResult`, `PersistencyBlockModel`, `Cidr32Block`, `ServerException`, `BackgroundBlurOptions`, `IMatrixProducer`, `Events.pointerdragend`, `SetOverlap`, `TraceServiceSummary`, `ExecOutput`, `IVisualizerVertex`, `ResponseHandler`, `Accidental`, `RetryStrategy`, `AndroidSplashResourceConfig`, `ShrinkStrategyMock`, `CategoryCollection`, `Mailbox`, `FindUsersResult`, `FuncArg`, `CityRouteProps`, `HookEffects`, `PaginationModelItem`, `InitialAlert`, `StructureValue`, `BaseContext`, `ConfigurationModule`, `StacksOperationOutput`, `tcl.Tag`, `ModuleResolver`, `BroadlinkAPI`, `AppStateStatus`, `Vout`, `SlashingProtectionAttestation`, `ICodeGenerationStackOutput`, `RegistryConfig`, `ElementInfo`, `GeoJSONGeometry`, `LinuxDistribution`, `Caching`, `pxtc.BlocksInfo`, `CompiledProxyRule`, `VoyagerSubscriptionContextProvider`, `BinaryOp`, `ir.Type`, `ModuleRpcCommon.EncodedContext`, `JobIdOption`, `SceneContext`, `NodeJSKernelBackend`, `StatefulChatClient`, `ModelPredictConfig`, `DeletePolicyCommandInput`, `Entries`, `CppParseTree`, `TFSavedModel`, `GfxTextureDescriptor`, `YCommandInput`, `IStatusWarning`, `ClusterExplorerResourceNode`, `d.CompilerWorkerContext`, `ComponentWithProps`, `StyledCharacterStrategy`, `AssignOptions`, `ethereum.PartialTransaction`, `WritableData`, `IMdcSelectElement`, `EventSource`, `RailRider`, `DataTypeConfig`, `IFrameAttachment`, `PlannerPage`, `CommandRunner`, `VariableData`, `DocumentFragment`, `BlogState`, `Champion`, `TextContentBuilder`, `RouteArgs`, `KVStorageBackend`, `RunnerOptions`, `SupClient.ProjectClient`, `FetchTicketsActions`, `ReleaseActionProps`, `ParserOptionsArgs`, `Lobby`, `IPermissionReturnType`, `TestKmsKeyring`, `AirnodeRrp`, `ExpressionParams`, `CognitoMetricChange`, `ExecutionProbe`, `protos.google.iam.v1.ITestIamPermissionsRequest`, `ProductEntity`, `T.ComponentMap`, `InputResolution`, `ScreenshotBuildResults`, `DecodedSignaturePart`, `XScaleType`, `d.ErrorHandler`, `ConnectedUser`, `TwingCompiler`, `InfiniteLine`, `requests.ListTagsRequest`, `OrderGraph`, `TestAccountProvider`, `FieldValuePair`, `BodyImplementation`, `LanguageModes`, `ParameterListContext`, `DatabaseConfiguration`, `UX`, `AnyArena`, `TAggConfig`, `d.RollupConfig`, `PersonaIdentifier`, `HashType`, `Presence`, `IDataSourceDictionary`, `SerializedMessage`, `LabelService`, `Datatable`, `AngularPackageLoggerMessageType`, `CommandLineOptionOfCustomType`, `ListImagesRequest`, `ModelRenderContext`, `DeclinationDictionary`, `NativeInsertUpdateManyOptions`, `ZoneDelegate`, `d.JsonDocs`, `IResultSelection`, `SpawnClose`, `BindingItem`, `Logs`, `chrome.tabs.Tab`, `XRViewport`, `CompositeStrings`, `WriteFunc`, `THREE.Path`, `GlobPattern`, `ForeignKey`, `OpusRtpPayload`, `d.JsonDocsDependencyGraph`, `T.Model`, `ChatState`, `GetDevicePositionHistoryCommandInput`, `MessageQueue`, `Plane3dByOriginAndUnitNormal`, `TestSystemContractsType`, `ListEvents`, `IResource`, `O1`, `ResponseValue`, `TransferCommitment`, `SignIn`, `HasPos`, `BoolShape`, `InnerPlugin`, `PlaneGeometry`, `Embeddable`, `RequestConfigT`, `PermissionService`, `KickGroupUsersRequest`, `Renderers`, `PersistedEvent`, `MarkerOptions`, `ForAllSuchThatInput`, `IKactusFile`, `ContextInternal`, `RepositoryManager`, `pulumi.Input`, `PreferenceChangeEvent`, 
`WeaveNode`, `PointProps`, `JobExecutionState`, `MagicExtensionWarning`, `TimeSlotService`, `requests.ListGiVersionsRequest`, `ThemeType`, `NotificationOptions`, `NFT721V1`, `CastEvent`, `SkiplistNode`, `TransactionMetadata`, `Rigidbody3D`, `IOsdUrlControls`, `FunctionReturnTypeCallback`, `ISearchGeneric`, `ifm.IRequestInfo`, `PeerType`, `datePickerModule.DatePicker`, `TileCorners`, `Domain`, `HintID`, `Cmd`, `UpdateAliasCommandInput`, `IdentityData`, `DebugPluginConfiguration`, `ServerSideVerificationOptions`, `Uint64Array`, `NodePolyfillsOptions`, `PageEditContextInterface`, `SVError`, `QueryParameters`, `GeoShape`, `ChatServer`, `BoundExistsFn`, `GlitzServer`, `FirstDataRenderedEvent`, `XBus`, `WorldCountry`, `ExpressRouteCircuitConnection`, `HTMLLinkElement`, `IGovernancePowerDelegationToken`, `ObjectCsvStringifier`, `KeyboardState`, `CoreImagesContract`, `FakeCommand`, `CSSOpts`, `Globber`, `PropertyName`, `VehicleState`, `EditorPlugin`, `ResolvedElement`, `ErrorWithMetadata`, `SVGForeignObjectElement`, `PropertyType`, `browsing.FilesView`, `ThemeUIStyleObject`, `PathComponent`, `PluginPass`, `DQCCacheData`, `MetricSeriesFragment`, `HookRegistry`, `AFileParser`, `GetAuthorizerCommandInput`, `ComponentList`, `CodeCell`, `CdsAlert`, `GenericOperationDefinition`, `SignedStopLimitOrder`, `IReadOnlyFunctionParameterCollection`, `SizeProps`, `ActionDefinitionByType`, `BcryptAdapter`, `Sym`, `S2`, `FirewallPolicyRuleCollectionGroup`, `TransferDirection`, `PartialRequired`, `LocalRegistry`, `TestUnitsProvider`, `WithString`, `InflectorRule`, `TranslateParams`, `MockService`, `UploadObservable`, `AxisSpace`, `AddConfigDeprecation`, `FetchStore`, `ChangeDetectorRef`, `DeletePortalCommandInput`, `JPADynamicsBlock`, `ReducerNode`, `NavigatorAxis`, `Conv2DConfig`, `SourceNode`, `NavigationNode`, `GPUAdapter`, `DiagnosticMessageChain`, `Overmind`, `DatabaseReference`, `GDITrack`, `GfxResource`, `ResolverInfo`, `ScaleContinuousNumeric`, `StudioBase`, `AssetMarketPrice`, `DeepMocked`, `BreadcrumbService`, `AggregationType`, `ScheduleActions`, `ResizeObserverCallback`, `ContestModel`, `Desc`, `MockedKeys`, `ScanSegmentVectorItem`, `ListPlaceIndexesCommandInput`, `Sequelize.Sequelize`, `VisualizationLinkParams`, `EmotesProvider`, `ModuleLinks`, `NumberPattern`, `ContainerProps`, `InternalCredentialManager`, `KeySchemaElement`, `SerializedRootCID`, `NameExpression`, `KeysToKeysToAnyValue`, `AggsSetupDependencies`, `TInjectTokenProvider`, `ThyTableColumn`, `TestHelper`, `GenericThemeShape`, `MockComm`, `NoteSnippetContent`, `uibPagination`, `Typehole`, `RibbonComponent`, `Timestamp`, `TranslationProject`, `ScrollViewProps`, `FullQuestionWithId`, `RulePathEntry`, `ProcessingEvent`, `FunctionResult`, `NetworkIndicator`, `NullableDateLimit`, `RequestProps`, `MeshStandardMaterial`, `ChannelStateWithSupported`, `Previews`, `MappingTreeObject`, `Depth`, `ConnectionSettings`, `LoggingService`, `ConfigValueFormat`, `SeriesBarColorer`, `webpack.LoaderContext`, `FirebaseAuthState`, `CompilerJsDoc`, `TsDocumentService`, `InstanceFailoverGroup`, `WebhookRequestData`, `RecipientCounts`, `ESLCarousel`, `CharacterStatsCalculator`, `RendererPlugin`, `UIEdgeInsets`, `IPolygonPoint`, `SimpleDeep`, `AxisData`, `ConsoleColor`, `PrinterService`, `DecomposedJwt`, `OcsConnection`, `MapDispatch`, `ProjectStatsChartDataItem`, `SemanticsAction`, `BrowserFeature`, `DiceRoller`, `HttpStatusCode`, `JiraIntegrationService`, `VaultItem`, `NzResizeEvent`, `THREE.Shader`, `CategoryThread`, `IDom`, `BluetoothCharacteristicUUID`, 
`TxData`, `NormalizedPapiParameters`, `TooltipOptions`, `UpdateDomainNameCommandInput`, `UserPasswordEntity`, `UseQueryPrepareHelpers`, `AttachedProperty`, `SqrlStatementSlot`, `ModbusEndianness`, `PerpV2BaseToken`, `IGroupCellRenderer`, `InvalidInputException`, `C7`, `Watchman`, `IHeaderExtensionObject`, `PathResolver`, `DynamoDB.BatchGetItemOutput`, `IParticle`, `CloudFrontWebDistribution`, `ScrollRect`, `IMeta`, `IHTMLInjection`, `Destination`, `ISavedObjectsRepository`, `UploadChangeParam`, `VpcSubnetType`, `Events.deactivate`, `ScopedHistory`, `NativeBookmarks.BookmarkTreeNode`, `UpdateRequest`, `ResourceCollection`, `SavedObjectEmbeddableInput`, `ViewPortComponent`, `PutPolicyCommandInput`, `RSTPreviewManager`, `ListConfigurationsRequest`, `WalletObjective`, `ConfigurationSectionEntry`, `ClientException`, `AlertCluster`, `SelectionManager`, `ResizeHandler`, `ForbiddenException`, `ParsedRequest`, `BehaviorTreeStatus`, `SectionMarkerConfig`, `GalleryState`, `AppInstanceJson`, `EditprofileState`, `ImageFormatTypes`, `ICacheEntry`, `HTTPHotspotObject`, `CrochetModule`, `Tsoa.Parameter`, `AWS.ELBv2`, `VariantMatchedResult`, `Compiler`, `IterationTypesResolver`, `py.AST`, `Area2DSW`, `AdInfo`, `CopyOptions`, `StorageLayout`, `LinkedAccountsService`, `WarningLevel`, `CodeGenOptions`, `SpawnHandle`, `NotebookModel`, `QueryStringFilter`, `TaskRoutine`, `NumberSymbols`, `ProposalService`, `HandlerDecorator`, `FilesystemEntry`, `ExtendedObject3D`, `ImGui.Vec4`, `ReducerAction`, `RouterService`, `FluentRules`, `ModuleDependencies`, `PropAliases`, `TableOfContentItem`, `FunctionImportParameters`, `TableConstraint`, `DestinationFetchOptions`, `ScannedClass`, `ParjserBase`, `TooltipOffset`, `ButtonStyles`, `oai3.Model`, `ParameterValues`, `ts.IntersectionTypeNode`, `Ray2d`, `IdentGenerator`, `Percent`, `SModelRoot`, `ContactCardEmbeddable`, `TriangleOrientation`, `RV`, `NodeCore`, `ModalComponentType`, `PartialTypeGuard`, `IAvailabilitySlotsCreateInput`, `HeaderActionIconProps`, `PathElement`, `DeclarationKind`, `EventActionCallable`, `TreeNodeIndex`, `ListNotificationsCommandInput`, `UntagResourceResult`, `NumbersImpl`, `NodeJS.Signals`, `TexImageSource`, `ListResponseModel`, `BIP32Interface`, `TSource`, `IImageAsset`, `System`, `ChannelChainInfo`, `ast.NodeTag`, `A11yConfig`, `AppContext`, `TimeseriesDataRecord`, `AuthHttp`, `TooltipProps`, `SKColor`, `ActualTextMarker`, `EventRegistry`, `MultipleFieldsValidator`, `GetGlobalObjectOptions`, `FluidObject`, `IUsedState`, `gang`, `IImperativeError`, `FetchData`, `FusedTeamMemberType`, `HeaterState`, `ITypedEdge`, `NumberSchema`, `iNotification`, `jqXHR`, `APIGatewayProxyHandler`, `RegisteredServiceAccessStrategy`, `WaveFile`, `E.Either`, `StagePanelManagerProps`, `zod.infer`, `restify.Server`, `TreeItem`, `ThroughStream`, `ICounter`, `Events.pointermove`, `TargetDiezComponent`, `ConchVector4`, `IMutableFlatGridItem`, `Conv3DTranspose`, `angular.resource.IResourceClass`, `SyncOpts`, `EventUiHash`, `FakeProviderConnection`, `NextComponentType`, `IUserState`, `ReactionCanHandleOptions`, `Re`, `WalletStateType`, `GXMaterial`, `Verification`, `EdgeCalculator`, `Exhibition`, `NetworkService`, `AddressFormat`, `ExecutableItemWrapper`, `ChildNodeType`, `NewObjectOptions`, `ExtensionModule`, `NodeKeyJSON`, `TsSelectedFile`, `IPass`, `SassError`, `CodeModExports`, `TestStatus`, `NavMenu`, `TablePipeline`, `EnrollmentAPIKey`, `DeleteAliasCommandInput`, `QuerySubmitContext`, `NewBalanceFn`, `SubCategory`, `DeviceCreateParams`, `IRegularAttr`, 
`OnePageDataInternal`, `MultiSigSpendingConditionOpts`, `ESFixedInterval`, `SetStateCommitment`, `V2SubgraphPool`, `IDataFilterValueInfo`, `Redis.RedisOptions`, `ManifestApplication`, `PathTree`, `MediaObserver`, `lf.schema.Column`, `BlobDownloadResponseParsed`, `AuthCredential`, `Slack.Message`, `From`, `PolicyProxyHookOptions`, `SingleVertexInputLayout`, `FirebaseFirestore.CollectionReference`, `EffectiveTypeResult`, `CmsIndexEntry`, `StatefulChatClientOptions`, `ActionMetadata`, `MuteRoomTrackResponse`, `GroupService`, `V1Secret`, `ColumnState`, `TokenStore`, `StopItem`, `CreateSiteCommandInput`, `GoalKPI`, `Mission`, `SharedFunctionStub`, `AddressListItem`, `AndroidProject`, `IConstructor`, `FTP`, `GlobalNames`, `PartitionSmallMultiplesModel`, `TextInputLayout`, `IAppError`, `IGenerateReleaseNotesOptions`, `StdioOptions`, `ChannelEffects`, `CdkDrag`, `TaskFunction`, `FieldArrayWithId`, `InfectableParticle`, `SerializedBoard`, `TipLengthCalibration`, `DiffParser`, `AttendanceService`, `SolflareWallet`, `LabelPropertyDataFilterer`, `DefaultEditorAggAddProps`, `ILayoutState`, `SceneParams`, `PIXI.Sprite`, `ElementSize`, `Section`, `ThemeProperty`, `sdk.SessionEventArgs`, `BleepsSetup`, `MainTab`, `TsInterfaceInfo`, `PromiseType`, `OriginConfig`, `ZebuLanguage`, `ISerializedInterval`, `HttpClientResponse`, `Bitstream`, `FaunaDBClient`, `VpcData`, `TimePickerProps`, `SlashCommandStringOption`, `GenericOneValue`, `GeneratorManifest`, `ResolutionHelper`, `TheRdsProxyStack`, `TypeInfo`, `MiddlewareAPI`, `PropertyValues`, `ExplorerExtender`, `PasswordService`, `URLMeaningfulParts`, `requests.ListIncidentsRequest`, `GeoPointInput`, `Security2CCNonceReport`, `DiagnosticRelatedInfo`, `BizResponse`, `StatusNotification.Status`, `VPC`, `InternalsImpl`, `GetRegexPatternSetCommandInput`, `IAnimatable`, `AbstractColumn`, `PackageTreeItem`, `THREE.DataTexture`, `UseCaseFunction`, `CrudFeatures`, `IPackageFile`, `GraphWorkspaceSavedObject`, `IInboundSignalMessage`, `DraftArray`, `Compose`, `Y.XmlText`, `MapOfType`, `Pose`, `BitbucketAuthResponse`, `AxisOrientation`, `RecoilState`, `WrapperOptions`, `ServiceRepresentation`, `BulletViewModel`, `Cycle`, `MenuMapEntry`, `ISetBreakpointsArgs`, `ITrackWFeatures`, `Statement`, `IBatteryCollectionItem`, `IKLink`, `Mountable`, `GetSpaceParams`, `AzureDevOpsOpts`, `IPaginationProps`, `ToneConstantSource`, `CarsService`, `GraphQLContext`, `CompilerEventBuildLog`, `PostInfo`, `AppCheck`, `TestBufferLine`, `UiSchema`, `Peak`, `U2NetPortraitConfig`, `ParameterValue`, `IArticleAction`, `IContentSearchOptions`, `MergeFsResult`, `GeneratedPoint2D`, `IWebsocketMessage`, `ActionHistory`, `PageType`, `WebDNNCPUContext`, `CompilerStyleDoc`, `CustomerLayoutState`, `AddressLabel`, `OnMessageFlags`, `RawSkill`, `AbridgedFormatErrorMetadata`, `TestFunctionImportComplexReturnTypeParameters`, `FiltersActions`, `NamedTypeNodeDef`, `TooltipData`, `LeaseOperationResponse`, `OutputNode`, `EventInstance`, `LaunchConfig`, `DataTableRow`, `UserAction`, `MessagingDeviceResult`, `LegacyDrawing.Animation`, `PathContext`, `IConnextClient`, `EmbeddablePackageState`, `AppSettingsFormValues`, `B4`, `IAddGroupUsersOptions`, `QueueType`, `angular.ICompileService`, `ComputedOptions`, `DescribeCertificateCommandInput`, `SetOperations`, `CommonProps`, `PriorityQueue`, `SessionModel`, `SetupResult`, `ODataModel`, `MaterialGroup`, `ID3Selection`, `NPC`, `SaveResult`, `AssemblyExpressionContext`, `SerialPort`, `ContactDocument`, `Names`, `PinReference`, `firebase.firestore.QueryDocumentSnapshot`, 
`LSTMLayerArgs`, `Mmenu`, `ScrollBar`, `FragmentDefinition`, `HyperionWorkerDef`, `GatewayTreeItem`, `ScaleConfigs`, `AppMessage`, `StateModel`, `StdSignDoc`, `TrackItem`, `UserRef`, `IAuthenticatedHubSearchOptions`, `ComponentContext`, `MutableVector4d`, `DefaultTallyConfiguration`, `BodyType`, `IUsersRepository`, `Royalty`, `ElfSectionHeader`, `moment.Duration`, `InteractionForegroundService`, `DisplayNameOptions`, `BaseAction`, `CurveExtendMode`, `OutChatPacket`, `AnimationReference`, `FiatCurrency`, `PerformWriteArgs`, `InteractionState`, `TimeSection`, `SnapDB`, `IpPermission`, `BasicProps`, `MultiWord`, `tStringCurrencyUnits`, `GroupProps`, `ValidationState`, `Conv1D`, `SocketAddress`, `App.SetupModule`, `__SerdeContext`, `DateRange`, `requests.ListExternalDatabaseConnectorsRequest`, `IMatcherFunction`, `LabelDoc`, `monaco.editor.IModelDeltaDecoration`, `IDisposer`, `TeamMembershipProps`, `SMTFunction`, `D3Service`, `UpdateCustomVerificationEmailTemplateCommandInput`, `CertificateAuthorityLifecycleState`, `MockMember`, `IStateMachine`, `DeleteCampaignCommandInput`, `FaunaRef`, `SnapshotQuotaExceededFault`, `TocItem`, `FlowDocument`, `RefsDetails`, `tfc.io.ModelArtifacts`, `DescribeEngineDefaultParametersCommandInput`, `CSSVarFunction`, `NameValidationError`, `ProblemInfo`, `ModuleMetadata`, `NgForageOptions`, `IEmployee`, `ToastConfigCommon`, `CommandMap`, `IBundle`, `UnicodeBlock`, `PercentLengthType`, `StructuredStorageBaseHelperOptions`, `Counter1`, `QueryCreator`, `AnyEntity`, `StoryContext`, `USampler2DTerm`, `FileInfo`, `sdk.SpeechRecognizer`, `DetectorRecipeDetectorRule`, `TransformerFactory`, `Uint`, `CurrencyFractions`, `msRest.CompositeMapper`, `HookCallback`, `TKeyboardShortcutsMapReadOnly`, `CircleProps`, `FunctionDefinitionConfig`, `IconPosition`, `prng`, `TID`, `GalleryService`, `VdmActionImport`, `FlowPostFinally`, `CachedBuildRequestOptions`, `bitcoin.payments.Payment`, `ListFHIRExportJobsCommandInput`, `WebPhoneSession`, `CandidatePersonalQualitiesService`, `NavigateOptions`, `IPacketHeader`, `SearchProps`, `AbstractMaterialNode`, `coreClient.OperationSpec`, `GameEntityObject`, `SavedObjectsExportTransform`, `url.Url`, `RawMessage`, `FiniteIEnumerator`, `Contributors`, `Hierarchy`, `RnM2BufferView`, `DepthStyleProps`, `PanelActionParams`, `IEpisode`, `IPropertyListDescriptor`, `MergeItem`, `SizeResult`, `IAppointment`, `PageConfig`, `DeletePresetCommandInput`, `IIteratee`, `LazyExoticComponent`, `FMAT`, `SearchType`, `ITopic`, `BattleCommitment`, `fromUserActions.GetReviewersStatisticsCollection`, `Slide`, `ParsedFunctionJson`, `SlackMessageArgs`, `ImageSize`, `InterfaceName`, `DefaultTreeNode`, `PlayerType`, `NAVTestObject`, `BlueGreenManifests`, `MouseManager`, `VNodeProperties`, `ObservableVocabulary`, `PromiseWithProgress`, `PCLike`, `DeleteClusterResponse`, `interfaces.Newable`, `Models.LeaseAccessConditions`, `ZosAccessor`, `BufferState`, `NullableT`, `InteractionManager`, `sdk.ConversationTranslator`, `TypeDecorator`, `Lane4`, `IVector`, `StyleErrors`, `ListResult`, `ReleasesClient`, `ts.FunctionLikeDeclaration`, `ts.Type`, `RpcCallParameters`, `ModuleMock`, `messages.GherkinDocument`, `TransformMessage`, `LayerFromTo`, `HashHistoryManager`, `Category`, `VideoTileState`, `IFullItemState`, `BudgetSummary`, `BindingName`, `CustomSkill`, `Starter`, `MaybeProvider`, `Rope`, `TreeItemModel`, `PromiseDelegate`, `WriteConditionalHeadersValidator`, `ICommandBarItemProps`, `SignalListener`, `STMultiSort`, `ModuleName`, `IWarning`, `WaitContext`, `NotifyQueueStore`, 
`android.content.DialogInterface`, `CohortComposition`, `ts.Signature`, `ReadStream`, `JobExecution`, `ExecutionErrorProperties`, `TaskDefinition`, `PointSeriesColumn`, `TaskDraft`, `InvokeArgument`, `FSPath`, `Memoized`, `HsToastService`, `ClientGrpcProxy`, `TraceId`, `Fund`, `SignalingConn`, `GetInput`, `TranslationString`, `LoginComponent`, `Models.ModifiedAccessConditions`, `MergeEl`, `AlbumEntity`, `Chapter`, `ITagHandler`, `LogAnalyticsSourceMetadataField`, `RX.CommonProps`, `DataResolverOutputHook`, `Materialize.ChipDataObject`, `UserDeposit`, `Datastore.Transaction`, `LoadCallback`, `P2PEnhancedPeerInfo`, `EdgeImmut`, `AnySchemaObject`, `StackItem`, `ChainableTransform`, `IEventHandler`, `TilemapProject`, `SnapshotIn`, `NextService`, `StackInfo`, `Shorthand`, `AuctionViewItem`, `PossiblePromise`, `MdcSnackbarService`, `ACM`, `TabViewItem`, `ScriptDataService`, `CommandInput`, `ViewContainerPart`, `ReferenceNode`, `DelugeTorrent`, `KeyResultUpdate`, `Byte`, `ServiceContainerConfig`, `MerkleTree`, `HookEvent`, `Builders`, `GLintptr`, `d.HydrateDocumentOptions`, `Bridge`, `NotificationInfo`, `ValidationArguments`, `BindingInputBase`, `P2PRequestPacket`, `FlowAssignmentAlias`, `ts.ExpressionWithTypeArguments`, `DeleteRepositoryPolicyCommandInput`, `CalendarFieldsOptions`, `DaffCategoryFilter`, `DocumentUri`, `MatchInfo`, `ListenerFunction`, `MTD`, `Target`, `StyleManagerService`, `FeatureService`, `Agreement`, `ScaleTime`, `ConvexPolygon2d`, `FileOptions`, `OpenSearchClient`, `HMACKey`, `Learnset`, `TemplateParameter`, `EventBody`, `FormElement`, `MyDirectoryTree`, `Species`, `AudioModule`, `NetworkDiff`, `CliManipulator`, `RelativePlaceAnchor`, `StringifyOptions`, `LegacyOpenSearchError`, `ParserRuleContext`, `Arguments`, `RuleObject`, `RequestApprovalService`, `IDynoCollectionKeyValue`, `GitFileChange`, `HTMLElement`, `Stop`, `FormConfigProps`, `TypeScriptConfigurationBuilder`, `IMP4AudioSampleEntry`, `ServiceDownloadProvider`, `PlayerViewState`, `RiskViewEntry`, `InvalidParameterCombinationException`, `ThingView`, `PhysicsCollider`, `TooltipOperatorMenu`, `ReturnTypes`, `BridgeProtocol`, `SelectInfo`, `TinaFieldInner`, `CSSStyleRule`, `ColorInput`, `Utils`, `HeaderBag`, `ArraySchema`, `ChainInfoWithEmbed`, `AppStateTree`, `QueryResolvers.Resolvers`, `ErrorUtilitiesService`, `IHooks`, `SeparableConv2D`, `GraphQLEnumValueConfigMap`, `CanvasDepth`, `Locales`, `CacheAdapter`, `Coupon`, `CheckerResult`, `GunMsgCb`, `TestClass`, `BrowserNode`, `CustomRequest`, `ListJobsResponse`, `BrandModuleBase`, `DebugProtocol.Message`, `CodeQualityInformation`, `CreateOperation`, `RumPerformanceResourceTiming`, `WebGLCoreQuadOperation`, `ts.ImportSpecifier`, `TestChannel`, `CaseItem`, `BitMap`, `ReconnectingWebsocket`, `IReducers`, `PaperSource`, `SourceInformation`, `DeviceSelector`, `LiteloaderVersion`, `INorm`, `DateWidget`, `DialogData`, `IArmy`, `ContainerRuntime`, `GUITheme`, `ModbusConnection`, `UnidirectionalTransferAppAction`, `IProjectMetadata`, `IAccountDataStore`, `CLM.AppBase`, `TypographStyle`, `UserAgent`, `QueryDeploymentsRequest`, `PhysicalTextureType`, `SaveDialogReturnValue`, `vscode.DebugAdapterExecutable`, `IStep`, `SchemaDifference`, `DaffProductDriverResponse`, `DecodingTransformer`, `V1ExpressionModel`, `CompletionItemKind`, `FBSDKSharing`, `Angulartics2GoogleAnalytics`, `RectangleNode`, `NetworkInfo`, `DitherKernel`, `WeakGenerativeCache`, `ICached`, `QualityLevel`, `ProofRequest`, `NotificationsService`, `Def`, `Dispatched`, `WlDocs`, `Database.User`, `GasTokenValidator`, 
`ReadonlyVec2`, `GT`, `angular.IWindowService`, `ConnectionDataEnvelope`, `DirType`, `IDeploymentCenterPublishingContext`, `FieldElement`, `NonArpeggiate`, `ClassFacts`, `PermissionItem`, `OutputTargetDocsJson`, `SFCBlock`, `LongHeader`, `DatabasePool`, `IdentityIndex`, `IndexType`, `IPartitionLambdaConfig`, `ResourceNotFoundFault`, `CalendarType`, `IListFormResult`, `PointerOverEvent`, `MultiLineStringDataVariant`, `ContactEmail`, `NoInputAndOutputCommandInput`, `TaskExecution`, `ExtendedClient`, `PGTransform`, `UnknownError`, `ControlInterface`, `TimeOpStatementContext`, `SidebarMenuItem`, `NetworkRequestInfo`, `IntermediateToken`, `ResourceLimitExceededException`, `GenericComboBoxProps`, `Traversable2`, `PluginFunction`, `IConsoleResponse`, `I18nService`, `IArtTextCacheData`, `ModifyDBParameterGroupCommandInput`, `requests.UpdateConnectionRequest`, `IBaseTransaction`, `NetconfForm`, `FalsyPipe`, `InitStoreState`, `Matrix2D`, `Service$`, `ModuleBase`, `UserMembership`, `ConverterContext.Types`, `OutgoingSSNResetRequestParam`, `ConvertComponent`, `DateRangeValuesModel`, `DaffCartReducerState`, `ThemeGetter`, `AnimationEvent_2`, `R.Morphism`, `PipelineResult`, `SupRuntime.Player`, `ConversionResult`, `GetUserInfoSuccessCallbackResult`, `HttpContextConstructorContract`, `UpdateDataSourceCommandInput`, `PositionalArgument`, `MentionDefaultDataItem`, `QuakemlService`, `RGBAColor`, `GeoInput`, `DefaultAnchors`, `IFieldProps`, `RendererStyleFlags2`, `GetConnection`, `HTMLIonRadioElement`, `FormState`, `StoredTransaction`, `SearchOptionModel`, `LoggerInterface`, `CTransactionSegWit`, `FirmaWalletService`, `MockAdapter`, `MongoDB.Filter`, `BulkApplyResourceAction`, `MDCTabScrollerAdapter`, `ArtifactFilePaths`, `config.Data`, `Refinement`, `GroupedRequests`, `ListResourcesCommandInput`, `KeyModifierModel`, `Subst`, `TransformOptions`, `SpacesServiceStart`, `TransmartSubSelectionConstraint`, `CID`, `CurrentHub`, `Color4`, `AppNotification`, `IApiOperation`, `DeveloperExamplesPlugin`, `TransportEvent`, `FieldBase`, `ExtensionManager`, `ParsedArgs`, `BullBoardQueues`, `PlayerContext`, `SettingsContextProps`, `TxHash`, `ethers.Contract`, `guildInterface`, `SearchDetails`, `PartnersState`, `Cypress.ObjectLike`, `GetJobResponse`, `GX.TevScale`, `IElem`, `UnitHelper`, `LocalStorageAppenderConfiguration`, `DirResult`, `MeetingAdapterState`, `FleetStartServices`, `KeyedTemplate`, `B16`, `Heater`, `Breakpoint`, `QuestionDotToken`, `TiledObject`, `GlobalLogger`, `IMatches`, `DataViewHierarchy`, `Doctor`, `NEOONEProvider`, `Dungeon`, `TmdbMovieDetails`, `GToasterOptions`, `NgxGalleryOptions`, `StackHeaderInterpolationProps`, `LanguageOption`, `ResourceIdentifier`, `PutScalingPolicyCommandInput`, `DP`, `PromptType`, `turfHelpers.Feature`, `AlertingAuthorization`, `WorkerFunction`, `WizardStep`, `IWireMessage`, `ResourceReturn`, `MetadataError`, `Indexer`, `SpeechRecognitionResult`, `AsyncIterable`, `VideoListQueryDto`, `ThrottlingException`, `TFLiteWebModelRunner`, `IApplicationHealthStateFilter`, `ClientError`, `VoiceFocusSpec`, `ComponentHTTPClient`, `ALong`, `PreprocessingData`, `protocol.FileRequestArgs`, `TextEditorDecorationType`, `OpenChannelEvent`, `UrlConfig`, `HeadConfig`, `EventQueueItem`, `Int8`, `ItemT`, `FunctionObject`, `BentleyCloudRpcParams`, `CssToken`, `vscode.FormattingOptions`, `OpenEdgeConfig`, `PatchResult`, `messages.FeatureChild`, `AudioContext`, `StoreST`, `ARecord`, `DeleteVolumeCommandInput`, `KubeContext`, `ITimeOffPolicy`, `GaxiosOptions`, `BindingDirection`, `UserInterface`, 
`StorageInfo`, `ViewService`, `FSNode`, `StepBinding`, `DirectionConstant`, `DragactLayoutItem`, `DeleteUserRequest`, `SelectableValue`, `KeyValue`, `JsonDocsMethodReturn`, `DataSourceItemGroup`, `JwtKeyMapping`, `MouseEventInit`, `socketIo.Socket`, `EdgeConfig`, `CustomControlItem`, `ComboTree`, `Darknode`, `CppBuildTask`, `Diff`, `TradeContext`, `AuxPartition`, `EntitySchema`, `TemporalArgs`, `NzNotificationRef`, `AssetMap`, `Pet`, `HardwareConfiguration`, `NodeID`, `GlimmerAnalyzer`, `Security`, `ParentContexts`, `cc.Button`, `AlertExecutorOptions`, `IndexedPolyfaceVisitor`, `SqrlConstantSlot`, `ArrayBufferReference`, `requests.ListUserAssessmentsRequest`, `TwitchChatMock`, `requests.ListComputeCapacityReservationsRequest`, `Initialized`, `DueReturn`, `NetworkKeys`, `SpeechGenerator`, `MatchingFunc`, `d.ConfigFlags`, `Output`, `RuntimeTransaction`, `IntentSchema`, `Configurations`, `Timezone`, `TranspileResult`, `Edit`, `ClusterService`, `FileWatcherCallback`, `Nil`, `types.CodeError`, `WWAData`, `PAT0`, `XhrRequest`, `NavigatorGamepad`, `ISideEffectsPayload`, `GX.TexFormat`, `DropEvent`, `ESLToggleable`, `IRealtimeEdit`, `ConceptTypeDecl`, `Reply`, `ClickEvent`, `CoreTheme`, `MultilineTextLayout`, `OrderDetailService`, `Production`, `IAuthOptions`, `ILog`, `Bar`, `PairSet`, `NodeChildAssociationEntry`, `ItemEventData`, `CodeEditor.IEditor`, `IntegrationCalendar`, `PopupComponent`, `ContractDefinitionContext`, `LocationStrategy`, `TKeyArgs`, `SlotStatus`, `IdeaId`, `IsInstance`, `DBQuery`, `InputDataConfig`, `Config.Path`, `ContentNode`, `UIAnalytics`, `StringContext`, `ISpecialStory`, `AllocationUpdatedArg`, `DataTable.Column`, `IGitProgressInfo`, `CharSet`, `CursorPopupInfo`, `UnsupportedBrowsers`, `ConnectionDetails`, `IQuery`, `RustPanic`, `UtilService`, `PDFContentStream`, `MatIconRegistry`, `DownloadJob`, `JoinMode`, `AbstractContract`, `ITerminalContext`, `LifecycleRule`, `thrift.TList`, `BMDData`, `selectorParser.Node`, `RouteCache`, `GenericModel`, `CanvasLayerModel`, `JQueryStatic`, `Yoga.YogaNode`, `PdbStatusDetails`, `IChannelState`, `PathState`, `WorkspaceFolderContext`, `ModelConstructorInterface`, `SendPropValue`, `IConfirmedTransaction`, `AttrMap`, `PolylineProps`, `CompoundStatementListContext`, `JupyterLabPlugin`, `cpptools.Client`, `FakeShadowsocksServer`, `CodeChangedEvent`, `InternalOpExecutor`, `RPCPayload`, `IExportFormat`, `DescribeImageVersionCommandInput`, `TokenAddressMap`, `BookStoreService`, `ex.PreCollisionEvent`, `ParamsSpec`, `CreateDBClusterSnapshotCommandInput`, `RegisterX86`, `AreaNode`, `TaskCacheSession`, `OmvFilterDescription`, `AppResourcesModel`, `ApplicationGateway`, `DiagnosticSink`, `PutItemInput`, `MockDeviceManager`, `SharedArrayBuffer`, `BaseHistory`, `Aspects`, `Environment`, `TinaCloudCollectionEnriched`, `DomModule`, `MultiKeyStoreInfoWithSelectedElem`, `TransactionOrKnex`, `SocialSharing`, `Overlord`, `CreateDBClusterEndpointCommandInput`, `Screenshot`, `AxisConfig`, `UserMetadata`, `UIViewController`, `RemovePermissionCommandInput`, `SitemapXmpOpts`, `RDQuery`, `ExchangePriceService`, `IoLog`, `FinalizeHandler`, `WebSocketSubject`, `DropTargetSpec`, `Intl.NumberFormatPart`, `NzNoAnimationDirective`, `DropViewNode`, `CircleModel`, `OrganizationalUnitResource`, `GfxInputStateP_GL`, `BSPNode`, `BlockMarketCategory`, `AwsS3PutObjectOptions`, `XNA_Texture2D`, `ProtocolExecutionFlow`, `LuaDebug`, `VRMFirstPerson`, `OpenSearchDashboardsReactPlugin`, `TokenContext`, `IGenericObject`, `KeyConnectorService`, `VideoSource`, `TilePath`, 
`UserResponse`, `BuildingColorTheme`, `Multiplexer`, `JsonDiffNode`, `ConditionalTransferUnlockedEventData`, `ShaderProgram`, `NoteContent`, `Mat`, `BulletOption`, `ComponentDocument`, `NodeInjectorFactory`, `CalendarUnit`, `WorkerTestHarness`, `ModifierType`, `CounterDriver`, `XAudioBuffer`, `NavigationProvider`, `PlannedOrganizationalUnit`, `ValidationChain`, `YieldEveryOptions`, `PointerPosition`, `UploadInfo`, `ComponentInfo`, `IWorkflowExecuteAdditionalData`, `ContractPrincipalCV`, `ImportRules`, `types.FormatTransfer`, `ContractCallResults`, `FabricSmartContractDefinition`, `HdBitcoinCashPaymentsConfig`, `GnosisSafeContract`, `JPACVersion`, `DaffCartTotal`, `TransactionBase`, `PredicateModel`, `MilkdownPlugin`, `OptionsMap`, `CommunOptions`, `PropsFromRedux`, `ApiTypes.Feed.Like`, `AngularFireObject`, `PoolData`, `SendableMsgBody`, `MatchingRoute`, `DescribeReplicationInstancesCommandInput`, `App.Context`, `DeviceManagerState`, `ListTagsForResourceCommandOutput`, `RRect`, `IsolationStrategy`, `ParamDef`, `IDataFilterResult`, `Field_Group`, `ShareButtonsConfig`, `amqplib.Options.Publish`, `CategoryItem`, `ComponentServer`, `EditorEvent`, `DeleteUserCommand`, `JsonPath.ExpressionNode`, `MainProps`, `RpcClient`, `BackupDestinationDetails`, `NodeEventTypes`, `MissingError`, `CreateMigrationDetails`, `MinimalViewPortConfig`, `StudioModelData`, `ListProjectsCommandOutput`, `VariableDeclarator`, `SMTExp`, `ProofMateItem`, `NodeType`, `DataConvertType`, `ThyPopover`, `Hideable`, `CellProps`, `IServerModel`, `IPersistence`, `Neovim`, `TestStateBase`, `PolicyResult`, `requests.ListInstanceagentAvailablePluginsRequest`, `MyState`, `IProcedure`, `ExpNumBop`, `HsSensorUnit`, `DeploymentTable`, `IdentityMap`, `HandlerContext`, `Ok`, `MetaverseService`, `TensorListMap`, `TEntityRecord`, `WithItemNode`, `InputManager`, `NumberNodeParams`, `InMenuEvent`, `ConstantBackoff`, `NoopExporter`, `RulesetVariable`, `TimelinePoint`, `ConfigureLogsCommandInput`, `angular.IPromise`, `t.CallExpression`, `ComponentLoaderFactory`, `GetPropertiesResponse`, `CommandConfig`, `JassTimer`, `ExtremaOptions`, `BindGroupLayout`, `AbiItemModel`, `IExtensionActivationResult`, `ClassStruct`, `ModelLifecycleState`, `AliasHierarchyVisitor`, `WsHttpService`, `PageBuilderContextObject`, `Attributions`, `IValidatorConfig`, `JSXElementAnalysis`, `L13Element`, `IconInfo`, `Transaction.Info`, `UpgradePlugin`, `Comp`, `HttpRequest`, `cdk.GetContextValueResult`, `ICategoryBin`, `AndroidAction`, `DangerResults`, `TextEdit`, `RewardTicket`, `ObjectTypeDefinitionNode`, `AssertTrue`, `CallbackFunction`, `MDCChipActionAttributes`, `MatchedRoute`, `TextDelta`, `ITransport`, `GreenBean`, `DocumentSettings`, `Message`, `ClientSocket`, `SignatureProvider`, `SpecialKeyMatchResult`, `HTMLSourceElement`, `CreateGroupCommand`, `specificity.Specificity`, `RoleState`, `TypeWithInfo`, `Start`, `MeshData`, `ChatThreadClientState`, `LimitItem`, `Direction`, `IConnections`, `FormatCodeOptions`, `CdsInternalOverlay`, `ScriptLikeTypes`, `Fork.Fork`, `WebKitEntry`, `ApplicationStateMeta`, `ReBond`, `ConceptServer`, `WebGLSampler`, `NoOpStep`, `EntireGame`, `amqplib.ConfirmChannel`, `SlackOptions`, `IFunctionIdentifier`, `RegisterParams`, `Renderer2`, `IOriginNode`, `d3Selection.Selection`, `AnalyticsProperties`, `RepoNameType`, `CategorySummary`, `MultiMap`, `InfiniteScrollDirective`, `JSONCacheNode`, `Drawable`, `ScalarCriteriaNode`, `VulnerabilityAssessmentPolicyBaselineName`, `SourceNodesArgs`, `iam.Role`, `PDFCrossRefStream`, `CheckSearchSessionsDeps`, 
`AsyncArrayCallback`, `DefaultReq`, `Level3`, `XMenuNode`, `G6Event`, `CommentKind`, `VariantHandler`, `MethodDocumentationBlock`, `Outline`, `Ref`, `VariableExpression`, `WKNavigation`, `WriteValueOptions`, `BodyAndHeaders`, `SEErrorRefresh`, `BillDebtor`, `ServiceSpy`, `InternalCoreSetup`, `FeeOption`, `TreeModelSource`, `CarouselConfig`, `EngineResponse`, `WorldCoordinates`, `SystemIconProps`, `Crdp.Runtime.StackTrace`, `ParseQueryOutput`, `BitbucketUserRepository`, `NetworkDefinition`, `ScheduledCommandInfo`, `CoingeckoApiInterface`, `SeriesConfig`, `ExtrudedPolygonTechnique`, `SearchableContainerInput`, `LoggerProxy`, `RLYTTextureMatrix`, `TaskQueue`, `REPLServer`, `NormalizedVertex`, `Expression`, `StateLeaf`, `AxiosResponse`, `TextEditorSelectionChangeEvent`, `LucidModel`, `PutRetentionPolicyCommandInput`, `UpdateTargetDetectorRecipe`, `PositionContext`, `TrackerEvent`, `mod.TargetGroup`, `messages.Feature`, `BlockAtom`, `StructureRoad`, `HapiAdapter`, `AnyCurve`, `TrackedImportFrom`, `MessageData`, `TransmartDimension`, `ElMessageBoxOptions`, `ThyTableColumnComponent`, `FoamGraph`, `ApiProxy`, `Contracts`, `FoldersService`, `SvgIconProps`, `IDistro`, `StoreView`, `AvatarOverload`, `skate.Component`, `SelectProps`, `VoilaGridStackPanel`, `Interval`, `requests.GetWorkRequestRequest`, `PointInfo`, `ActionEvent`, `UseLazyQueryReducerAction`, `IDSLCodeState`, `CellDrag`, `PrivilegeFormCalculator`, `IdQuery`, `TypeConsApp`, `ShadowController`, `MeaningfulDependency`, `CreateObservableOptions`, `Line2`, `MockStream`, `ReduxAppState`, `AttrEvaluationContext`, `StateAction`, `SqlManagementClient`, `ExtUser`, `http.OutgoingHttpHeaders`, `TestViewController`, `ProjectDto`, `VCSConnector`, `TScheduleData`, `AggParamsAction`, `MediaStreamAudioSourceNode`, `XYZNumberValues`, `CaseNode`, `AuthPermissions`, `ILogService`, `com.google.firebase.firestore.FirebaseFirestoreException`, `MenuApiResult`, `KeystrokeAction`, `RelationClassDecorator`, `GrpcResponseMessageData`, `TemplateChildNode`, `ElementProperty`, `VideoPlayer`, `ILayerDefinition`, `requests.ListDedicatedVmHostsRequest`, `HeadingSize`, `ProofCommand`, `TodoState`, `DMMFPAS_Directives`, `InsertDelta`, `SpyTransport`, `Base64`, `PersonService`, `ValueToken`, `Polygon`, `Shell`, `PathStyleProps`, `IGetContentOptions`, `ISettingsState`, `RLYTPaneBase`, `DataTypeFieldConfig`, `IBaseImageryPluginConstructor`, `IPluginOptions`, `MachinomyOptions`, `TemplateState`, `TokensBuffer`, `ToplevelT`, `SwitchStatement`, `CalendarData`, `HitDatabaseEntry`, `Table`, `LatestControllerConfigType`, `CropperTouch`, `ChatErrorTarget`, `CallbackDisposable`, `IColorableSequence`, `NotifyOpts`, `HSLColor`, `AWSLambda.Context`, `MetricResult`, `requests.ListIPSecConnectionTunnelRoutesRequest`, `ContractDefinition`, `commonLib.IExtra`, `Directive`, `AuthScopeValues`, `ReactRef`, `IDateRange`, `config`, `SpeakerInfo`, `FileAccessData`, `MatDialogConfig`, `Pipette`, `ITestFluidObject`, `EventInitDict`, `IComputedValue`, `DialogInput`, `IMarkerData`, `UnitType`, `Texture_t`, `Badge`, `TransformFactoryContext`, `DaffPaypalReducerState`, `UseTransactionQuery`, `Inspection`, `KeyHandler`, `UiLanguage`, `GX.CullMode`, `AsyncThunkPayloadCreator`, `PartialBot`, `CopyTranslateResult`, `ProjectInformation`, `ITionPlatformConfig`, `DecodedSignature`, `ModalManager`, `Agency`, `NamedImports`, `AllPackages`, `LocationReference`, `BackstageItemState`, `IDataObject`, `BrandService`, `DeleteInstanceCommandInput`, `Allure`, `PipelineStageUnitAction`, `LexContext`, `OfficeFunction`, 
`EvalResponse`, `RowFormatter`, `ThemeName`, `EmusakEmulatorsKind`, `ContractAddressOrInstance`, `IOdspAuthRequestInfo`, `TextureId`, `Tensor4D`, `NavigationLocation`, `StorefrontApiContext`, `IClothingStore`, `RouteDependencies`, `ISettingsDataStorePayload`, `StorageContainer`, `IApiInfo`, `DeleteParameterGroupCommandInput`, `RadixTreeNode`, `ISelEnv`, `MacroHandler`, `TargetGroup`, `TimelineKeyframeStyle`, `OAuthTokenResponse`, `NumberContext`, `ExpressLikeRequest`, `NetworkFilter`, `DescribeAutoScalingGroupsCommandInput`, `ImageEffectDirector`, `DictionaryKeyEntryNode`, `StructResult`, `LogMatchRule`, `DocumentMigrator`, `SavedObjectsService`, `RequestBuilder`, `HSD_TECnst`, `DesignedState`, `FacadeConverter`, `MockedRequest`, `Levels`, `Http`, `TTypeName`, `NavigateFunction`, `d.MsgToWorker`, `ScriptCache`, `GitHub`, `NodeSnapshot`, `QueryOpts`, `IMenuState`, `MDCTextFieldOutlineAdapter`, `PartSymbol`, `BasicScene`, `BitcoinishPaymentTx`, `AstExprState`, `RootDispatch`, `types.NestedCSSProperties`, `ReadModelRequestEnvelope`, `TemplateClient`, `BackupSummary`, `ValidateFn`, `MgtTemplateProps`, `ObjectValue`, `WorkflowStatus`, `SuitDone`, `FileEntity`, `SwapParams`, `EffectHandlers`, `NodeContainer`, `requests.ListCrossconnectPortSpeedShapesRequest`, `AnnotationRectProps`, `MarkerDoc`, `IWalletContractServiceStrategy`, `IPackageJson`, `ThyTooltipDirective`, `OutputTargetDistCustomElementsBundle`, `IParsedPackageNameOrError`, `Directories`, `IWorldObject`, `SynthesisVoice`, `requests.ListIPSecConnectionTunnelsRequest`, `CdkToolkit`, `DynamicEllipseDrawerService`, `KeyboardDefinitionSchema`, `CursorDirection`, `PreviewDataImage`, `TreeNodeInBlock`, `FoodsFilter`, `IInspectorListItem`, `UpdateChanges`, `EitherAsync`, `ParamContext`, `ZonesManager`, `ServerStyleSheets`, `StripePaymentSession`, `FolderPreferenceProvider`, `Break`, `DraggableInfo`, `TypedColor`, `webpack.Compilation`, `FieldNamePath`, `OrganizationPostData`, `RequestPolicy`, `ProviderResult`, `SprottyDiagramIdentifier`, `TransliterationFlashcardFieldName`, `Types.SocPromise`, `Answer`, `PiInstance`, `ts.CompilerHost`, `OutlineCreateTag`, `LitecoinAddressFormat`, `AudioResource`, `IFunctionIndex`, `Services`, `NotificationsState`, `FallbackProps`, `IRelatedEntities`, `BootJsonData`, `NamedTensor`, `BatchWriteRequest`, `DecoderOptions`, `RouteInfo`, `NativeCallback`, `MenuPositionY`, `ThumborMapper`, `IERC20ServiceInterface`, `MigrateOptions`, `PutPublicAccessBlockCommandInput`, `TypeWithId`, `MalType`, `ITexture`, `TheTask`, `Sender`, `MetricModalProps`, `JsonRpcResponse`, `W4`, `TokenEndpointResponse`, `WorkflowMap`, `CreatePhotoDto`, `React.TransitionEvent`, `KeyResultTemplate`, `BranchInfo`, `ComponentCompilerPropertyComplexType`, `ts.CallExpression`, `CreatedTheme`, `SectionModel`, `Yaz0DecompressorWASM`, `requests.ListSecretBundleVersionsRequest`, `IUserProfileViewState`, `Database.Replica`, `ConfirmProps`, `ActionService`, `FunctionSignature`, `LogAnalyticsParserFunction`, `LoginAccountsValidationResult`, `LegendEntry`, `Quadrant`, `perftools.profiles.IProfile`, `PositionTranslate`, `iI18nConf`, `BinaryShape`, `AType`, `ResourceFile`, `SaveManager`, `serializedNodeWithId`, `UrlState`, `Int64Value`, `Warrior`, `XYChartScrollbar`, `Headers`, `IUILayoutViewController`, `Field.PatchArgs`, `EventTouch`, `AuthorizationPayload`, `LayoutComponent`, `ListPolicyVersionsCommandInput`, `Range1d`, `MotionInstance`, `TransformerHandle`, `OrderedSet`, `PackageEntry`, `ICandidateCreateInput`, `ToolbarItemsManager`, `TimefilterContract`, 
`ConsumerExtInfo`, `AddressBook`, `GitHubRepo`, `BlendOperation`, `ITransactionRequestConfig`, `IncompleteSubtypeInfo`, `SortedPatchList`, `TypeResolvingContext`, `NetworkRequestId`, `TreeDiagramNode`, `SensorType`, `ConvoState`, `InfoType`, `IServerFS`, `InputEventMouseMotion`, `SyntaxModifier`, `TKeys`, `CdsButton`, `DATA`, `cToken`, `LiteralType`, `TextContextTypeConvert`, `UseContextStore`, `CustomSecurityService`, `EmojiType`, `GridCellBox`, `ResponseData`, `Render`, `ContainerWarning`, `TSettings`, `SwapFn`, `GX.CA`, `PromiseOr`, `yauzl.Entry`, `DaffProductFactory`, `StructDeclaration`, `RTCRtpSimulcastParameters`, `VectorLike`, `AuthReducerState`, `next.Origin`, `Suggestion`, `InetLocation`, `IGLTF`, `TileContentRequestProps`, `code.Range`, `NgmslibService`, `SagaGenerator`, `listenerHandler`, `ClassDescription`, `NativeClarityBinProvider`, `PIXI.Point`, `MiddlewareResultFactory`, `StringAsciiCV`, `PinModelData`, `GX.IndTexStageID`, `ServiceResponse`, `Int`, `ParticipantInfo`, `Cli`, `LeftCenterRight`, `FormFieldProps`, `StyleSheetData`, `IPumpjack`, `apiClient.APIClient`, `BuildVariables`, `IFileRequest`, `ChannelPresenceEvent`, `ValueKey`, `SearchServiceSetupDependencies`, `angular.ui.IStateParamsService`, `ZenObservable.Observer`, `MapObjectAdapter`, `VertexBuffer`, `DeleteArchiveCommandInput`, `INpmDependency`, `ColorRgb`, `ApiKeyHandler`, `DebugProtocol.NextResponse`, `ApiDeclaration`, `TransationMessageOrObject`, `api.ISummaryTree`, `ClarityValue`, `ts.BuilderProgram`, `BuildrootUpdateSession`, `NgModuleMetadata`, `SpeciesName`, `HostWithPathOperationCommandInput`, `EnumId`, `StreamableRowPromise`, `LabDirectory`, `SignatureProviderResponseEnvelope`, `AMapService`, `INote`, `ToneOscillatorType`, `CorsOptions`, `GfxSampler`, `IDatabaseApiOptions`, `NativeError`, `IDBPObjectStore`, `TileMapAssetPub`, `ScreenViewport`, `ComplexError`, `ElementContent`, `IDatabaseDataDocument`, `DartDeclarationBlock`, `CandidateFeedbacksService`, `ParamInfoType`, `TicketsState`, `monaco.languages.FormattingOptions`, `SpanContext`, `Events.stop`, `BufferCV`, `ARRotation`, `MDCChipAnimation`, `TSTypeAnnotation`, `MPPointF`, `IPersona`, `CreateDeliverabilityTestReportCommandInput`, `MaskModel`, `IUserOptions`, `IBeaconConfig`, `TexMtx`, `Month`, `TodoTask`, `XmppChatConnectionService`, `DependencyContainer`, `SpaceProps`, `DispatchedAction`, `DocumentPosition`, `DayPeriod`, `EdgeCalculatorHelper`, `ListAssetsCommandInput`, `ReadState`, `Funnel`, `Types.KeyValue`, `CompatibleValue`, `LayoutParams`, `TimeHistory`, `EditableBlock`, `FriendList`, `EventSink`, `TRecursiveCss`, `AvatarService`, `Mutation`, `ILocalizationFile`, `IExcludedRectangle`, `RequestResponseLog`, `ICellData`, `CompilerSystemCreateDirectoryResults`, `LifecyclePolicy`, `LotTypeOption`, `HashEntry`, `SSRMiddleware`, `ThrottlerHelper`, `Tsoa.Metadata`, `HTMLChar`, `CustomQueryModel`, `MultilevelSensorCCReport`, `BinaryType`, `requests.ListDbHomePatchHistoryEntriesRequest`, `IProcessedStyleSet`, `ProviderRange`, `MockPlatform`, `TestFileInfo`, `IFakeFillerOptions`, `StoreOptions`, `PositionDirection`, `DeleteRepositoryCommandInput`, `NodeDecryptionMaterial`, `StringTokenFlags`, `TestMessage`, `AppStackOs`, `SeekProcessor`, `ArgumentInfo`, `ArtifactFrom`, `PixelMapTile`, `HTMLScriptElement`, `OperationMetadata`, `ControlSetItem`, `DIDDocument`, `NodeInputs`, `Node_Struct`, `FlowVariableAnnotation`, `TransactionEnvelope`, `SDLValidationContext`, `ListRetainedMessagesCommandInput`, `ALL_POSSIBLE_CHART_TABS`, `ControllerValidateResult`, 
`OffersState`, `PairTree`, `EndUserAgreementService`, `SimplePath`, `ValidatedOptions`, `requests.GetRRSetRequest`, `ApmFields`, `LabelCollector`, `ChildExecutor`, `ItemsOwner`, `SelectionsBackup`, `PartialReadonlyContractAbi`, `IWorkerChannelMessage`, `MessageDoc`, `ts.ResolvedModuleFull`, `GraphData`, `NominalTypeSignature`, `TextRangeWithKind`, `FileStorage`, `CoreProcessorOptions`, `PubKey`, `MDNav`, `ScannedMethod`, `ExportedData`, `FabricEnvironmentTreeItem`, `ReadableStreamDefaultController`, `TypeVarScopeId`, `VideoGalleryRemoteParticipant`, `KBN_FIELD_TYPES`, `CoinHostInfo`, `INodeMap`, `UISession`, `PoiTableEntryDef`, `CompilerWatcher`, `PerpMarketInfo`, `ControlType`, `AuthActionTypes`, `StagePanelLocation`, `ParsedResult`, `SavedObjectsMappingProperties`, `Compilation`, `LocaleTree`, `CreateDatasetGroupCommandInput`, `DidactPanel`, `PDFStream`, `Fees`, `EventDoc`, `ISearchDataTemplate`, `CriteriaFilter`, `VpnClientParameters`, `AccountRepositoryLoginResponseLogged_in_user`, `ExchangeContract`, `PNLeaf`, `QueryCommandOutput`, `MatchedSelector`, `PositionLimitOrderID`, `BandcampSearchResult`, `VNodeChild`, `Bytes`, `Field`, `Secured`, `DeleteIntegrationCommandInput`, `CanvasFontSizes`, `ICeloTransaction`, `SelectSeriesHandlerParams`, `d.VNode`, `VisiterStore`, `React.FocusEvent`, `ColdObservable`, `InternalStores`, `WidgetResolveResult`, `SentenceNode`, `RouterConfig`, `ko.Observable`, `CourseStore`, `ExecutableCallRegular`, `Suggester`, `pxtc.ExtensionInfo`, `Cubemap`, `AsyncOrderedHierarchyIterable`, `TestIntegerIterator`, `IPanelProps`, `GltfPreviewPanelInfo`, `CookieOptions`, `RelayerTypes.PayloadEvent`, `ColorLike`, `ITransitionActions`, `ElementStyle`, `AutoUVBox`, `CoursesService`, `ISiteState`, `ChannelHandler`, `OneHotVector`, `ApiError`, `RailsWorkspace`, `HighlighterCellsProps`, `RelationModel`, `MustacheFile`, `IBrowsers`, `PaymentState`, `WebsocketRequestBaseI`, `Measure`, `StateData`, `HashSetStructure`, `TokenSigner`, `DateFilter`, `EventDecorator`, `UnparseIterator`, `ODataSchema`, `Dim`, `DSTInfo`, `Snapshots`, `SqrlParserState`, `VideoDetails`, `AccessTokenInterface`, `PartialGestureState`, `requests.ListAppCatalogSubscriptionsRequest`, `TkmLogger`, `TinyCallContext`, `UpdateExpression`, `FontMetrics`, `CPoolSwap`, `Type_AnyPointer_Parameter`, `MakeHookTestStep`, `License`, `SavedObjectsBaseOptions`, `MessageAttributeValue`, `UnaryFunction`, `CommandOptions`, `VSCodeBlockchainOutputAdapter`, `MatchJoin`, `InstanceKey`, `GitHubRepositoryModel`, `BRepGeometryFunction`, `UseBodilessOverrides`, `BuilderRuntimeNode`, `AstNodeMerger`, `EsmpackOptions`, `android.os.Bundle`, `ConsoleInterface`, `sinon.SinonStatic`, `RpcProgram`, `MapDispatchToPropsParam`, `BriefcaseDbArg`, `SequelizeOptions`, `CreateBotVersionCommandInput`, `DialogPropertySyncItem`, `TextStylePropsPart`, `IWaveFormat`, `MakeRequest`, `SModelRootSchema`, `OperationMethod`, `MIREntityType`, `StandardResponse`, `PropertyASTNode`, `Size`, `SavedObjectsBulkResponse`, `SankeyDiagramNode`, `PDFOperator`, `NzIconService`, `ProjectedXYArray`, `QuestionService`, `ThemeSpec`, `WithName`, `TimesliceMaskConfig`, `BasicRoller`, `Spy`, `TSESTree.Node`, `RedisCommand`, `BusinessAccount`, `SFATexture`, `BoxVo`, `EventSourceHash`, `SlashCreator`, `$N.NeighborEntry`, `CartItem`, `Checksum`, `core.App`, `PiProjection`, `Provider`, `ScannedImport`, `CategorizedOption`, `Interface`, `ArgumentContext`, `IAuthor`, `Appointments.AppointmentProps`, `CmsStructureConfig`, `RemoveGroupControlAction`, `BaseEventOrig`, `UserContext`, 
`WebAppStack`, `IModelOptions`, `ApiDefinitions`, `SectionItem`, `BirdCount`, `VisualizationProps`, `BinarySensorCCAPI`, `OrderedMap`, `UniqueNameGenerator`, `MigrateFunctionsObject`, `ContentRect`, `d.EntryModule`, `ChatPlugContext`, `FieldAgg`, `Triplet`, `YearProgressService`, `TaskType`, `item`, `CircleResponderModel`, `FormatterSpec`, `FetchedPrices`, `IPCResult`, `StorexHubApi_v0`, `Board`, `LockedGoldInstance`, `EnumHelper`, `SystemMouseCursor`, `GX.TexFilter`, `DebugProtocol.PauseResponse`, `CompilationResult`, `IssueSummary`, `Inner`, `MlRouteProps`, `ContextBinding`, `PutAccountsValidationResult`, `PopulatedFolderDoc`, `FaceletT`, `BatchSerialization`, `PokerHandResult`, `PluginNamingConfiguration`, `RecursiveXmlShapesCommandInput`, `MachineParseResult`, `Rx.Subject`, `TspClient`, `InputParallelism`, `StringAnyMap`, `DAL.DEVICE_ID_LIGHT_SENSOR`, `LoadStrategy`, `cg.Role`, `HealingValue`, `PLSQLSymbol`, `LanguageServerConfig`, `SerializerState`, `DialogOptions`, `EventArgDeclaration`, `DocumentDecoration`, `BrowserWindow`, `ClientAssessments`, `FilePickerBreadcrumbItem`, `FactoryKey`, `Model.Project`, `XmlNode`, `PluginEvent`, `apid.ReserveEncodedOption`, `FragmentSpreadNode`, `Page`, `ts.FormatDiagnosticsHost`, `Diagonal`, `DragPanHandler`, `PackageNode`, `ParsedValue`, `parse5.DefaultTreeDocument`, `SpywareClass`, `WithNumber`, `LoginData`, `IconSize`, `Path3`, `LinkTransport`, `IndexPatternsFetcher`, `ManagementOption`, `Brackets`, `ResponderExecutionStatus`, `TemplatePatcher`, `IModalProps`, `firebase.database.Reference`, `RefreshService`, `ISendOptions`, `OperationArguments`, `ERC1155PackedBalanceMock`, `NothingShape`, `ReactiveEffect`, `MemberMethodDecl`, `ICustomField`, `ElementRunArguments`, `RenderService`, `TargetDetectorRecipe`, `ThyIconRegistry`, `AnimationState`, `IDinoContainerConfig`, `ToastConfig`, `TestStruct`, `INodeWithGlTFExtensions`, `Draw`, `CreateExceptionListItemSchema`, `SelectPartyToSendDelegate`, `IContextLogger`, `StatusPublisher`, `IDatabaseDataAction`, `CheckPrivilegesPayload`, `TypeInference`, `PiNamedElement`, `RefInfo`, `TestERC20Token`, `OperationResponseDetails`, `Vertice`, `ReLU`, `IMatchingCriterions`, `Subscriptions`, `IMyDateModel`, `TCmdData`, `SModelElement`, `StreamPipelineInput`, `DCollection`, `UpdateAppCommandInput`, `Lit`, `SpriteWithDynamicBody`, `NpmFileLocation`, `StringValue`, `IndicatorValuesObject`, `Crdp.Runtime.RemoteObject`, `ZeroExTransactionStruct`, `WS`, `SUPPORTED_FIELD`, `CW20Currency`, `NjsActionData`, `BoxShadowItem`, `BarChartOptions`, `ng.auto.IInjectorService`, `LocalFilter`, `EsQueryAlertParams`, `IInputProps`, `SubscriptionHandler`, `CardTagsProps`, `GroupsGetterFn`, `ColorOp`, `GeneratorResult`, `ts.TransformerFactory`, `DeleteApplicationResponse`, `ChangePassword`, `ListNotebookSessionsRequest`, `FetchLinks`, `FSMCtx`, `AzureTokenCredentialsOptions`, `BasicIteratorResult`, `ValidatorsFunction`, `FluentNavigator`, `ValueConstraint`, `OnModifyForeignAction`, `GeneralStorageType`, `model.Range`, `WorldgenRegistryKey`, `SidenavState`, `ExpressionRenderDefinition`, `Vector4d`, `AtomOptions`, `InventoryStore`, `SVGRenderer`, `Configs`, `ModulePath`, `ListApplicationsCommandInput`, `OPCUAServer`, `QueryByBucketMethod`, `DeleteWorkflowCommandInput`, `GetRowIdFn`, `TextOp`, `Deployment`, `MappingParameters`, `StorageUtility`, `TextDocumentSyncOptions`, `PlatformRef`, `ItemCount`, `TerminalProcess`, `Listeners`, `OwnerKeyInfoType`, `Photo`, `DbResult`, `SxChar`, `WalletProviderInfo`, `SpaceQuery`, `ITableMarker`, 
`TileDocument`, `GraphRewriteBuilder`, `MultiStepInput`, `IDatabaseDriver`, `VoiceFocusTransformDeviceObserver`, `NSURLSession`, `CollisionStartEvent`, `FeeEstimateResponse`, `Accounts`, `RelationEntry`, `RestyaboardItem`, `HTMLBodyElement`, `LROperation`, `IReaderState`, `ChildGraphItem`, `Vec4`, `BrowserExceptionlessClient`, `ASSymbolType`, `WechatyInterface`, `InterfaceRecursive`, `FlowFlags`, `ESLNote`, `MultisigAddressType`, `GmailResponseFormat`, `ModelSchemaInternal`, `Web3Callback`, `GetConnectionCommandInput`, `TimelineDragEvent`, `TeamWithoutMembers`, `AuthenticateAppleRequest`, `ARAddBoxOptions`, `SortableSpecService`, `OrderTemplatesDetailsPage`, `UpdateExperimentCommandInput`, `MockS3`, `UserRepository`, `Movimiento`, `BaseClusterManager`, `Vector2D`, `InitialArgv`, `Circuit`, `Discussion`, `DeleteResourcePolicyCommand`, `ForumActionType`, `validateTrigger`, `PhysicsHandler`, `puppeteer.ConnectOptions`, `mapTypes.YandexMap`, `CSharpResolversPluginRawConfig`, `ValidationParamSchema`, `grpc.Client`, `InternalServiceError`, `ActionFactoryDefinition`, `TreeModelNode`, `IOSIconResourceConfig`, `VariantCreateInput`, `DomRenderer`, `SnapshotDiff`, `d.DevServer`, `VdmFunctionImportReturnType`, `AvatarSource`, `TestObserver`, `DayOfWeek`, `InitiateResult`, `NgRedux`, `Box2Abs`, `LoaderAttributes`, `ParsedIdToken`, `DeleteAttendeeCommandInput`, `DiagnosticSeverity`, `Trackable`, `IPlDocVariablesDef`, `ɵɵInjectableDef`, `IEBayApiRequest`, `MerchantUserService`, `SnakeheadDataTable`, `Subnet`, `MapRendererParameters`, `QueryDeploymentRequest`, `PersistencyPageRange`, `VisualizationContainerProps`, `EbmlElement`, `ContainerInstance`, `TypedData`, `LGraphCanvas`, `ReacordInstance`, `DocumentTypes`, `FakeDatasetArgs`, `WatchService`, `BuildProps`, `d.HydrateAnchorElement`, `BaseFunction`, `ScoreRecord`, `Validation.Result`, `IRowMarker`, `UnivariateBezier`, `PublicAppDeepLinkInfo`, `PyteaServer`, `RadioButtonViewModel`, `Oscillator`, `TerminalCommandOptions`, `runnerGroup`, `MongoEntity`, `PDFAcroForm`, `ComponentInstance`, `BoundingSphere`, `PatchDocument`, `DatabaseConnection`, `MicrosoftDocumentDbDatabaseAccountsResources`, `EmitResult`, `HttpServerOptions`, `Selectors`, `AbstractHttpAdapter`, `AllocatedNode`, `CurriedFunction5`, `RoomInterface`, `FuncKeywordDefinition`, `CodeGenExecutionItem`, `GaugeEvent`, `EthApi`, `RedioPipe`, `TaskState`, `TNSPath2DBase`, `FABRuntime`, `PageInterface`, `ProcessedPublicActionType`, `ArrayBufferSlice`, `UniversalRouter`, `Uint8ClampedArray`, `SearchResourcesCommandInput`, `PropInfo`, `WorkspaceLeaf`, `InvalidConfig`, `EditorView`, `ThemeTypes`, `IGetEmployeeJobPostInput`, `TemplateResult`, `IDropboxAuth`, `Recordable`, `SynWorkspace`, `AlgWithIssues`, `ParamMetadataArgs`, `VideoTile`, `GameManager`, `HomeState`, `Notification`, `SelectMenuItem`, `ThingDescription`, `ExpressionParseResult`, `ICompilerOptions`, `Pages`, `SelectOptionConfig`, `Viewer.Texture`, `CreateConnectionResponse`, `ApiOperationOptions`, `PluginMetadata`, `QuickFixQueryInformation`, `IAssets`, `SecurityGroup`, `DbLoadCallback`, `DataTablePagerComponent`, `TupleIndexOpNode`, `VersionStatusIdentifier`, `PyTypedInfo`, `CustomFieldDefinition`, `RepositoryEsClient`, `ClipPlaneContainment`, `FragmentDefinitionNode`, `StateCallback`, `PythonShellError`, `PlanNode`, `VariableInfo`, `StepIterator`, `AuthState`, `SWRKey`, `SpreadSheetFacetCfg`, `blockClass`, `IFabricGatewayConnection`, `IDBPCursorWithValue`, `OtherActionsButtonProps`, `SubjectDataSetJoin`, `FindOptions`, `JRPCEngine`, 
`TypedClassDecorator`, `IViewer`, `EmptyAsyncIterable`, `MouseDownAction`, `ConsensusMessage`, `ExtensionProperty`, `TimeChartSeriesOptions`, `electron.BrowserWindow`, `RectDataLabel`, `T10`, `ContentConfigurator`, `ServerWrapper`, `SubscriptionData`, `KeyPairBitcoinCashPaymentsConfig`, `JumpState`, `JWT`, `Address`, `SessionContent`, `ContentType`, `FieldDef`, `PromiseState`, `TypeAssertionSetValue`, `BinaryMap`, `PartiallyParsedPacket`, `Json.Value`, `GetComponentCommandInput`, `restify.Next`, `SqlTuningAdvisorTaskSummaryReportIndexFindingSummary`, `JavaDeclarationBlock`, `TypeDefs`, `LinkFacebookRequest`, `OcsNewUser`, `BackendDetails`, `d.JsDoc`, `DocSegmentKind`, `DeployStatusExt`, `CreateArticleDto`, `GitInfo`, `AudioBufferSourceNode`, `NextRequest`, `HTMLIonSegmentButtonElement`, `Giveaway`, `NotificationComponent`, `AsyncSystem`, `UiCalculator`, `ValidationProfileExt`, `TeamsMembersState`, `IChannelStorageService`, `AtomId`, `AsyncFluidObjectProvider`, `Jest26Config`, `DirectoryReader`, `DomElementGetter`, `IWorkflowPersona`, `Admin`, `IReferenceSite`, `PackagePolicyInput`, `Odb`, `DebugElement`, `RPCRequestPayload`, `BackgroundFilterOptions`, `ControlBarButtonProps`, `Jenkins`, `GetFindingsCommandInput`, `theia.Task`, `TutorialService`, `Bool`, `ConfigureResponse`, `StorageMeta`, `Pass`, `BackstageManager`, `Progress.ITicks`, `ITemplateBaseItem`, `ITypescriptServiceClient`, `ITableField`, `StreamingClientInfo`, `UseTransactionQueryReducerAction`, `EnumerateType`, `AllMdastConfig`, `NodeJS.WriteStream`, `TlaDocumentInfos`, `RevocationReason`, `AppHistory`, `Artifact`, `WeChatInstance`, `ODataClient`, `IdentNode`, `btRigidBody`, `FoodReducerState`, `SpineHost`, `TypingIndicatorStrings`, `CleanupType`, `ForeignInterface`, `BluetoothServiceUUID`, `RoundArray`, `ConnectionRequest`, `TagType`, `ICodeGenerationOutput`, `Construct`, `requests.ListBdsApiKeysRequest`, `Fault`, `RequestError`, `PackageDependencies`, `TestFrontstage`, `vscode.DocumentFilter`, `GLTexture`, `NumberFormatter`, `ProblemType`, `StringColumn`, `IncomingWalletConfig`, `GachaDetail`, `Deno.Listener`, `FormErrorProps`, `TableListViewProps`, `DescriptorProto_ExtensionRange`, `ILatLng`, `ArangoDB.Collection`, `DidState`, `TData`, `CSSLength`, `RawExpression`, `PrivateEndpointConnection`, `AsyncOperation`, `FilteredHintItem`, `Analysis`, `SwapTable`, `Bracket`, `FlowWildcardImport`, `AxeScanResults`, `SorterResult`, `FaucetConfig`, `Entire`, `InterfaceType`, `CryptoWarsState`, `Sig`, `HeadProps`, `StatusReport`, `SequenceTypes.Participant`, `Highcharts.AnnotationEventEmitter`, `EntityCacheSelector`, `SimulationState`, `GetVpcLinksCommandInput`, `AoptB`, `MetricsGraphicsEventModel`, `AggConfigsOptions`, `TestTracer`, `ModuleInterface`, `SPClientTemplates.FieldSchema_InForm`, `PreConfiguredAction`, `ITreeEntry`, `SortDirection`, `CombineOutputResult`, `_TsxComponentV3`, `RGBA`, `SensorGroup`, `NamedObject`, `DBProperty`, `IFeatures`, `BitcoinPaymentsUtilsConfig`, `IAddOrInviteContext`, `CollectionsService`, `BackendMock`, `Suffix`, `TrackedBuffEvent`, `PeerApiResponse`, `JFlap`, `Equalizer`, `MyComp`, `OpenIdConfig`, `RepositoryCommonSettingDataType`, `GetParseNodes`, `SymbolDataContext`, `HitResult`, `UpdateApiKeyCommandInput`, `GetExportCommandInput`, `HTMLParagraphElement`, `fromRepositoriesStatisticsActions.GetRepositoriesStatisticsCollection`, `CivilHelper`, `ForkStatus`, `LineDataSet`, `GenericMerkleIntervalTreeNode`, `ElementMixin`, `AuthorizationResult`, `ErrorCollection`, `TriumphFlatNode`, `Io.Reader`, 
`IResizeEvent`, `AudioDeviceInfo`, `AnnounceNumberNumber`, `RepositoryWithGitHubRepository`, `sourceT`, `ObjectMetadata`, `TSpanStyleProps`, `ILaunchSetting`, `LinkSession`, `Participants`, `TagName`, `StrategyParameterType`, `NavTree`, `PlatformBrowser`, `ChartsPluginStart`, `IViewInstance`, `PQueue`, `DetachVolumeCommandInput`, `NexusInputObjectTypeDef`, `FullOptions`, `messages.TableRow`, `BarStyleAccessor`, `CdkDragDrop`, `ScopedDocument`, `ComponentCompilerTypeReference`, `TypeQueryNode`, `SafeResourceUrl`, `FileService`, `EnumInfo`, `Slur`, `ITemplatedComposition`, `MicrofabComponent`, `FileEmbedder`, `ObjectMultiplex`, `ObiDialogNode`, `GetVpcLinkCommandInput`, `ResourceInfo`, `TransitionEvent`, `SASQueryParameters`, `ActionCreatorWithNonInferrablePayload`, `UnorderedStrategy`, `ObjectRelationship`, `TupleCV`, `HistoryOptions`, `DequeueSharedQueueResult`, `AuthEffects`, `StartChannelCommandInput`, `WebAppCollection`, `supertest.Test`, `InformedOpenLink`, `DownloadStationTask`, `UpdateInfo`, `com.stripe.android.model.PaymentMethod`, `WebGLShader`, `AnimationSampler`, `AzExtLocation`, `TextDirection`, `pxt.TargetBundle`, `XhrDetails`, `Screenshoter`, `LuaInfo`, `UnknownType`, `HeadingCache`, `JWTPayload`, `ApiModel`, `WeuData`, `Aux`, `IndexerError`, `requests.ListDbNodesRequest`, `JSDocTupleType`, `MongoError`, `IMailTransferAgent`, `ProofResponseCoordinator`, `SearchEsListItemSchema`, `GetRRSetResponse`, `ElasticsearchFeatureConfig`, `ICitableSource`, `requests.ListResolverEndpointsRequest`, `ExtractorContext`, `UploadAssetOptions`, `fnVoid`, `KeyPath`, `Number`, `ParsedMessagePart`, `TelemetryPluginSetup`, `ListTaskRunLogsRequest`, `ABLMethod`, `GuideEntryType`, `P4`, `ISimpleGraphable`, `BaseTelemetryProperties`, `commandParser.ParsedCommand`, `TaskRepository`, `MaybeRef`, `HTMLIonLabelElement`, `StandardChip`, `OctoKitIssue`, `RecordOf`, `Messaging.IPublish`, `TransformerContext`, `DAL.KEYMAP_MODIFIER_POS`, `PropertyConfig`, `ShipSource`, `Limit`, `Proto`, `RouteNotFoundException`, `StringUtf8CV`, `ThemeConfig`, `TokenKind`, `PieVisParams`, `WaiterConfigurationDetails`, `ViewValue`, `FormValue`, `NvRouteObject`, `AppsService`, `XPCOM.nsICategoryManager`, `Enforcer`, `GetParams`, `DeleteDomainNameCommandInput`, `SelectedUser`, `ethers.Wallet`, `OutputCache`, `LoggerContextConfigType`, `HarnessAPI`, `StyledDecorator`, `IntrinsicFunction`, `IRepositoryModel`, `GameWorld`, `RefreshOptions`, `MachineData`, `ClientRule`, `reflect.ClassType`, `RenderInput`, `ManagementAgentPluginDetails`, `ExtensionOptions`, `LovelaceCard`, `HierarchicalNode`, `PostProcessingFactory`, `InternalServerError`, `ContextMasquerade`, `ExpressLikeResponse`, `KeplrGetKeyWalletCoonectV1Response`, `ERenderMode`, `ExtHTLC`, `DirectedScore`, `AbortSignalLike`, `ManagementAppMountParams`, `AxisDataItem`, `UpdateDetectorRecipeDetectorRule`, `DefaultRequestReturn`, `ICommentData`, `RestApi`, `ScaleThreshold`, `DOMProxy`, `TDispatch`, `DataModel.CellRegion`, `StepState`, `requests.ListSendersRequest`, `CreateParameters`, `InvoiceItemService`, `DataTransfer`, `ListGroupsRequest`, `pulumi.ComponentResourceOptions`, `ActionButtonProps`, `PostRef`, `protos.common.IMSPRole`, `DescribeDBInstancesCommandInput`, `InvoiceQuotation`, `ConfigurableForm`, `ExtendedTypeScript`, `QueryFetcher`, `WechatOfficialAccountService`, `SqlTuningTaskStatusTypes`, `Span`, `IUserRepository`, `AccessListEIP2930Transaction`, `UI5Type`, `ChoiceSupportOption`, `TypeTreeNode`, `GlobalNameFormatter`, `WebSocket.Data`, `ArtifactEngineOptions`, 
`UITableViewCell`, `MockWebSocketClientForServer`, `puppeteer.Browser`, `ReaderTaskEither`, `StatementsBlock`, `TransmartHttpService`, `ComponentData`, `NavigationProp`, `Reserve`, `RefactorEditInfo`, `TradeablePoolsMap`, `DeferIterable`, `PackageJsonData`, `QueryMnemonic`, `server.AccessKeyId`, `TSFiles`, `IEntityModel`, `PackageJson`, `CommandDefinition`, `PullIntoDescriptor`, `Knex.JoinClause`, `DynamicFormNode`, `Trampoline`, `SecService`, `Ants`, `NgGridItemConfig`, `Ohm.Node`, `IGetItem`, `RequestChannel`, `nodes.Identifier`, `TypeAlternative`, `AccountItem`, `GetStageCommandInput`, `ApiDecoration`, `MainPageStateModel`, `ResolvedDeclarations`, `FolderInfo`, `Friend`, `Ed25519KeyPair`, `IExecutorHandler`, `UpdateCallback`, `ExportTypeDefinition`, `DOMRectReadOnly`, `BaseCommand`, `QueryFunctionContext`, `GraphQLDatabaseLoader`, `HealerStatWeightEvents`, `TableResult`, `CodeGenField`, `TimelineSpaceState`, `ITeamCardState`, `ResponseHeaders`, `BillingGroupCosts`, `DBSymbol`, `TileType`, `ManglePropertiesOptions`, `ErrorPropertiesForName`, `Float32ArrayConstructor`, `StyleManager`, `ICluster`, `ChangePasswordCommandInput`, `CreateDiagnostic`, `Axis`, `ArgumentType`, `TagEntry`, `Sinon.SinonStub`, `ProcessErrorEvent`, `RegionCardinality`, `Python`, `MarginCalculatorInstance`, `ApiTypes`, `CronJob`, `UpdateRuleCommandInput`, `DidCloseTextDocumentParams`, `DescribeConfiguration`, `DynamicLinkParameters`, `TaskManagerStartContract`, `Call`, `LabwareCalibrationAction`, `UpdateParams`, `SidePanelTransitionStates`, `SpawnSyncOptionsWithStringEncoding`, `IGetCountsStatistics`, `K.TSTypeKind`, `SetWindowProps`, `HapCharacteristic`, `DbPull`, `option`, `throttle`, `MIRGuard`, `CreateIndexCommandInput`, `SymVal`, `PoseNet`, `HealthStateFilterFlags`, `LeaderboardRecordList`, `html.Node`, `MiddlewareStack`, `EthereumERC721ContextInterface`, `PutDedicatedIpWarmupAttributesCommandInput`, `GetWebACLCommandInput`, `BlobWriter`, `Expr`, `FabSettings`, `ListRulesRequest`, `Tsoa.ReferenceType`, `DeleteDBClusterSnapshotCommandInput`, `ParserRule`, `Reflection`, `EnvelopeGenerator`, `PickingRaycaster`, `Universe`, `MatProgressSpinnerDefaultOptions`, `AxisMap`, `BatchType`, `EPNode`, `PathDefinition`, `ShippingEntity`, `LoadingState`, `ITransferProfile`, `DatabaseSession`, `StateWithNewsroom`, `YouTube`, `EmbeddableChildPanelProps`, `INodeInterface`, `ClrHistoryModel`, `Enumerable`, `PrimitiveStringTypeKind`, `CAInfo`, `ChatThreadProperties`, `IPercentileRanksAggConfig`, `Shape`, `ComplexSchema`, `Translations`, `GenerateTimeAsyncIterable`, `DocumentFormattingParams`, `ParsedTranslationBundle`, `TransportOptions`, `IDraggableProps`, `EquatorialCoordinates`, `HitBlockMap`, `Moods`, `AppThunkAction`, `EntryCollection`, `SubReducer`, `TransactionReceipt`, `Author`, `GetProfileCommandInput`, `ResultReason`, `mb.EntityType`, `AppMetaInfo`, `NT`, `XMLBuilder`, `RewriteMapping`, `MouseData`, `DescribeChannelMembershipForAppInstanceUserCommandInput`, `VisualizerInteractionTypes`, `HttpServiceBuilderWithMeta`, `WaitInfo`, `EosioActionTrace`, `JSDocTypeExpression`, `UIMenuItem`, `UserTenantRepository`, `FactoryState`, `SecurityClassOwner`, `SchemaType`, `HTMLStyle`, `IMyValidateOptions`, `ILangImpl`, `DefaultPrivacyLevel`, `SCXML.Event`, `DeleteFunctionCommandInput`, `ValueReadonly`, `IFormFieldData`, `InstancePoolPlacementSecondaryVnicSubnet`, `ComputedAsyncValue`, `ReadConditionalHeadersValidator`, `HttpProbeProtocol`, `paper.Point`, `WriterType`, `BoxSizer`, `InitObject`, `FileReference`, `ItemSearchResult`, 
`IExtension`, `Highlight`, `GameDataState`, `RunHelpers`, `GuiObject`, `JSONSchemaObject`, `CreateTag`, `AggregationFrame`, `BadgeStyle`, `WorkspaceFolderSetting`, `IObject`, `Intf`, `EditorsService`, `ServiceKey`, `FavouritesState`, `ast.SeqNode`, `GoToProps`, `MultipartFileContract`, `PatternSlot`, `MpqHash`, `SMTVar`, `ISshSession`, `MockStore`, `CallHook`, `ConnectionStatus`, `XRReferenceSpace`, `IEntityInfo`, `TimelineKind`, `SuiModalService`, `News`, `OptionsReceived`, `LanguageHandlers`, `PassthroughLoader`, `PanelMode`, `IMonthAggregatedEmployeeStatistics`, `Queued`, `SMTConst`, `MyServer`, `TasksEntityStore`, `IPublish`, `UIDatePicker`, `EnvSection`, `MeterCCSupportedReport`, `Memory`, `NodeJS.ProcessEnv`, `ContentRef`, `ModelPath`, `PublishState`, `UseGeneric`, `SerializedHouse`, `NodeView`, `GetShapeRowGeometry`, `ECPair.ECPairInterface`, `token`, `IStreamChunk`, `SendMessagePayload`, `T.Action`, `Commands`, `IndexTreeItem`, `SolutionToSolutionDetails`, `PluginAPI`, `DescribeChannelCommandInput`, `CoreTypes.TextDecorationType`, `BatchSync`, `RequestData`, `FlattenLevel`, `AssociationCCReport`, `TelemetryPluginConfig`, `Predicate2`, `Geopoint`, `ODataEnumType`, `Aabb2`, `pxt.Package`, `HttpCacheService`, `Point2DData`, `InputEvent`, `CreateDatasetCommandInput`, `SessionEvent`, `LedgerWriteReplyResponse`, `DefaultKeys`, `TextInputVM`, `Building`, `EncodedPaths`, `fopAcM_prm_class`, `Consensus`, `VirtualModulesPlugin`, `GridDataState`, `ClassNames`, `AsyncEvent`, `FrontmatterWithDefaults`, `ACTION`, `DocumentData`, `ValidResourceInstance`, `TopicForIndicator`, `nconf.Provider`, `TheSimpleGraphQLServiceStack`, `CubeFace`, `DBClusterRole`, `PacketHandler`, `RSV`, `TransitionType`, `ValueJSON`, `UrlParams`, `UnsignedOrder`, `TransformationResult`, `Squiss`, `ISubprocessMessageBase`, `GainNode`, `JSONSchema`, `WritingSettingsDelegate`, `vscode.TreeView`, `AccountsInstance`, `Duration`, `FcCoords`, `DIDDataStore`, `DVector3d`, `IObserverCallback`, `ParsedRequestUrl`, `FullscreenOptions`, `TriggerForm`, `NodeInfo`, `ODataApiOptions`, `FieldState`, `ObjType`, `IsRegisteredFeatureHandlerConstraint`, `DeleteResourceCommandInput`, `ChatCommand`, `AppDataType`, `IosTargetName`, `SyncMemoryDebe`, `GLProgram`, `MessagingSessionObserver`, `TExpected`, `UpdateConnectivityInfoCommandInput`, `ISize`, `IGetSurveyModelsResponse`, `Participant`, `PowerPartial`, `StopHandle`, `HttpProvider`, `Typography`, `HashBucket`, `VisTypeTimeseriesRequestHandlerContext`, `StopApplicationCommandInput`, `BookingVariant`, `ExampleData`, `AaveV2Fixture`, `LContainer`, `PathFinderGoal`, `GaugeStatus`, `DiagnosticTag`, `AssetPropertyVariant`, `RTCRtpCodingParameters`, `AutomationEvent`, `MdastNodeMap`, `Deno.Conn`, `ControllerRenderProps`, `ProjectSummary`, `ApplicationShell.Area`, `WatcherOptions`, `ISimpleType`, `AnimDesc`, `DescribeEventsCommandOutput`, `SecureTrie`, `UrlOptions`, `DecisionPathPlotData`, `CreepActionReturnCode`, `NetworkSecurityGroup`, `PublishedFurniture`, `MenuOptions`, `SerialAPIVersion`, `PlaneBufferGeometry`, `NamedMember`, `MalVal`, `ExpressionNode`, `AdaptElement`, `_MessageConfig`, `StreamLabsMock`, `SonarQubeApiComponent`, `StaticFunctionDecl`, `AnimationChannel`, `UserConfigSettings`, `EmailModuleOptions`, `LinterGetOffensesFunction`, `RequestHeaders`, `PointerEvent`, `DomSanitizer`, `MemoryAppenderConfiguration`, `HoverProvider`, `ConstInterface`, `IProjectNode`, `RouteLocationRaw`, `IEntityError`, `PDFParser`, `CardCollection`, `STPCardValidationState`, `thrift.Thrift.Type`, 
`FormBuilderConfiguration`, `URLSearchParams`, `VpnPacketCaptureStopParameters`, `Express`, `IsString`, `RequestProfile`, `SubMeshRenderElement`, `ISceneObject`, `RDBType`, `RowTransformerValidator`, `formatLinkHeader.Links`, `CosmosBalance`, `XPCOM.nsIXULWindow`, `OpenSuccessCallbackResult`, `ImageRef`, `IndexOpts`, `ListTagsForResourceResult`, `LoadableMeta`, `Ent`, `MutateInSpec`, `NVMOperationsResponse`, `CompilerContext`, `DataListProps`, `IAnimatedCallback`, `ApolloServer`, `IWarriorInstance`, `AdaptMountedElement`, `CollectionStore`, `Navigation`, `apid.RecordedId`, `WorkerConfig`, `ConversationItem`, `TreeSelectionModificationEventArgs`, `SimulatorDatabase`, `ResolveRecord`, `PathParameterValues`, `IPipeable`, `BlockchainHandler`, `YieldFromNode`, `TimeTravel`, `RoleIndexPrivilege`, `ImportInterface`, `HTMLIonAlertElement`, `FeatureSettings`, `jest.DoneCallback`, `IAmazonApplicationLoadBalancer`, `MouseButtonMacroAction`, `LayeredLayout`, `ExpressServer`, `SecureHeadersOptions`, `DocOptArgs`, `Ringmodulator`, `MetricsPublisherProxy`, `FunctionNode`, `RenderOptions`, `Deal`, `Recursion`, `FilterFunc`, `LogSeriesFragment`, `ModuleWrapper`, `MessageToWorker`, `ComponentOptions`, `VoidFunction`, `JSONObject`, `RowType`, `GrabOptions`, `OsuSkinTextures`, `TheoryDescriptor`, `ITypeEntry`, `DOMException`, `FutureWallet`, `SteeringPolicyRule`, `ValueValidator`, `MonoSynth`, `ContainerOptions`, `ApolloReactHooks.LazyQueryHookOptions`, `CallSignature`, `EventDeclaration`, `ColonyNetworkClient`, `AttributeFilter`, `GroupingCriteriaFn`, `Gettable`, `FocusTrap`, `AggTypeState`, `CompleteGodRolls`, `MappedTypeGuard`, `ParseMode`, `IHubRequestOptions`, `StartedTestContainer`, `Metadata`, `IClientConfig`, `MergeIntersections`, `IndexedReadWriteXYZCollection`, `IJsonPatch`, `DataTableColumn`, `EventProxy`, `FlowItemComponent`, `SequenceConfig`, `CraftProjectConfig`, `FcConnector`, `ProgressDashboardConfig`, `FcException`, `AckFrame`, `Alarm`, `PythonCommandLine`, `Sandbox`, `UIFunctions`, `SearchAllResourcesRequest`, `DummySpan`, `AppDependencies`, `TeamType`, `Terminator`, `ValidPropertyType`, `AuctionManager`, `JsonAstObject`, `TCollectionSchema`, `OptimizeJsInput`, `ZoomBehavior`, `PublicationRepository`, `Constants`, `IGameContextValue`, `XTableRow`, `TrigonometryBlock`, `PutEmailIdentityDkimAttributesCommandInput`, `d.CompilerBuildStats`, `DescribeOfferingCommandInput`, `UserSimple`, `UserDTO`, `BreadcrumbPath`, `TestAccounts`, `TemplateCache`, `DatePickerDayDateSource`, `Sqlite.Statement`, `CreateChannelRequest`, `TreeAdapter`, `RoomUserEntry`, `TokenMap`, `UnitTestTree`, `DayGridViewWrapper`, `DataSeries`, `ComponentCompilerVirtualProperty`, `ModbusForm`, `Ceramic`, `TSESTree.Literal`, `CellOutput`, `ElectronShutdownCommandOptions`, `ApplicationVersionFile`, `Keypoint`, `ExtensionInfo`, `BytesValue`, `StoreReadSettings`, `ClassDeclarationStructure`, `RecommendationLifecycleDetail`, `LineLeaf`, `IScheduler`, `IPFSDir`, `GzipPluginOptions`, `TraceOptions`, `WithSerializedTarget`, `SchemaArgInputType`, `SelectorMeta`, `QueryManager`, `ManyToMany`, `ExtendedLayer`, `PrRepository`, `ILanguageState`, `EntityAttributes`, `InstanceLightData`, `UIImageView`, `MatPaginatorIntl`, `MediaPlayerState`, `SettingItem`, `Share`, `ICreateUpdateLanguageConfig`, `TextStringContext`, `GfxRenderTargetDescriptor`, `TwoFactorProviderType`, `BarcodeScannerConfig`, `LocalDirName`, `ListShapesRequest`, `Market`, `ServersState`, `CapsizeOpts`, `QueryCertificatesRequest`, `RoleRepresentation`, `AttributeInfo`, 
`SwingTwistSolver`, `DropLogFile`, `CustomPriceLine`, `SchemaKey`, `Reg`, `EditablePoint`, `babel.Node`, `DOMPointInit`, `Compact`, `StyleMap`, `TeliaMediaObject`, `NodeImmut`, `Income`, `SchemaResult`, `TextGeometry`, `Ticks`, `IsCommon`, `PrivKeySecp256k1`, `ListaTarefas`, `JobHandler`, `AsyncRequestHandler`, `TestBedStatic`, `Lock`, `DtlsServer`, `Instruction`, `CompilerHost`, `SortType`, `ListsState`, `SerializeImportData`, `DefaultRollupBlock`, `CompilerFileWatcherEvent`, `NgxsRepositoryMeta`, `request.OptionsWithUri`, `IconifyIconBuildResult`, `ConsoleAPI`, `ContentGroupProps`, `GovernanceMasterNodeRegTestContainer`, `LinearProgress`, `SelectItemDescriptor`, `JsonRpcHttpClient`, `LayoutCompatibilityReport`, `RenderPassDescriptor`, `ShaderSocket`, `CodeBuilder`, `ThemeColorable`, `vscode.TestController`, `IBazelCommandOptions`, `ListDataSetsCommandInput`, `INeonNotification`, `ast.UnaryNode`, `IconStorage`, `FilterRule`, `requests.ListDbVersionsRequest`, `EncryptedObject`, `RangeBucket`, `CallAgent`, `DynamoDbPersistenceAdapter`, `RestMultiSession`, `ChromeMessage`, `GQtyError`, `DisplayErrorPipe`, `VideoDownlinkObserver`, `Parsed_Result`, `BigSource`, `LocalNetworkDevice`, `Geolocation`, `BisenetV2CelebAMaskConfig`, `NetWorthSnapshot`, `ResultProgressReporter`, `DynamicFormValidationService`, `JOverlap`, `TodoListRepository`, `DocumentTree`, `ConfigOptions`, `SettingsState`, `DeployParams`, `RollupAggregator`, `RemoveEventListenerFunction`, `DynamicDialogConfig`, `FormFieldModel`, `DockerAuthObj`, `MIRInvokeFixedFunction`, `ResolvedRouteInfo`, `Handlebars.TemplateDelegate`, `Monad`, `ActivityComputer`, `ProjectChangeAnalyzer`, `DeleteBackupCommandInput`, `DataProxyAPIErrorInfo`, `ISlackPuppet`, `ListenerCallbackData`, `ISubImage`, `RoomInfo`, `UpdateCustomEndpointDetails`, `TransactionVersion.Testnet`, `ImageEncoder`, `Die`, `INode`, `EquipmentSharingPolicyService`, `SourcemapPathTransformer`, `QComponentCtx`, `ListrContextFinalizeGit`, `ContainerGetPropertiesResponse`, `HydrateImgElement`, `Deps`, `Properties`, `ReducerArg`, `QueryEntityKey`, `DocgeniHostWatchOptions`, `ContentRequestOptions`, `ThySelectionListChange`, `ChannelsState`, `AccountsContract`, `lambda.Function`, `CtrEq`, `UI5Namespace`, `Kind3`, `DialogStateReturn`, `ml.Attribute`, `SocketEvent`, `Parts`, `ISessionRequest`, `EntityCollectionDataService`, `GetRRSetRequest`, `IItemRendererProps`, `IRemote`, `M2ORelation`, `DashboardPlugin`, `IStoryItemChange`, `PixelType`, `Zoom`, `WebAssemblyInstantiatedSource`, `NavigationActions`, `HostComponent`, `EdaDialogCloseEvent`, `InternalFailureException`, `CliOutput`, `PutObjectRequest`, `GraphTxn`, `DaffCountry`, `iTickEvent`, `UseFormReturn`, `PathCursor`, `OwnPropsOfRenderer`, `ExchangePair`, `IListenerAction`, `HighlightRange`, `SerializedGame`, `BlockHash`, `MlClient`, `Behaviour`, `CommonMaterial`, `UnscopedEmitHelper`, `CodeMirrorEditor`, `RTDB.Get`, `BadRequestErrorInfo`, `FileSystemCache`, `PropertyCategory`, `HttpProbeMethod`, `LinkI`, `runtime.HTTPQuery`, `Contest`, `GroupConfig`, `PromiseJsExpr`, `TouchEvent`, `LetterSpacing`, `Serializable`, `BasePackage`, `CustomElementRegion`, `Color`, `SetBreadcrumbs`, `interfaces.Context`, `PetStoreProduct`, `ClassWriter`, `ListTagsForResourceMessage`, `EnhancedSku`, `CSSEntries`, `LockType`, `STWidgetRegistry`, `SpeculativeContext`, `ListTablesResponse`, `NodePrivacyLevel`, `Ext`, `ITerminalProvider`, `EventInterface`, `ThemeCss`, `Types`, `RTDB.Subscribe`, `GeoBox`, `BinarySensorType`, `SchemaBuilder`, `IRequestConfig`, 
`XUL.contentWindow`, `OrganizationPolicySummary`, `SuccessfulParsedMessage`, `ColumnFormat`, `AuthConfig`, `IconType`, `CosmosOperationResponse`, `FolderDoc`, `AppError`, `ExecutionContextInfo`, `IDeclaration`, `VerifyJwtOptions`, `Purse`, `PrefBranch`, `CreateFileOptions`, `LoginUriView`, `Linear`, `ComponentTypeEnum`, `WebViewExt`, `IScriptingDefinition`, `ast.IfNode`, `SourceIntegrationInterface`, `PipelineRelation`, `FlowCondition`, `ITextDiff`, `CeloTokenContract`, `VercelConfig`, `ViewData`, `FaunaUDFunctionOptions`, `PackageManager`, `Instrument`, `flatbuffers.Offset`, `IJSONInConfig`, `iAction`, `ConfirmChannel`, `PluginDeleteActionPayload`, `K4`, `FoamWorkspace`, `ResultT`, `RegisteredSchemas`, `WeakRef`, `HeatmapDataSets`, `IFactor`, `TranslatePropertyInput`, `AnimationChannelTargetPath`, `EvmType`, `Snapshot`, `ToplevelRecord`, `PostConditionMode.Deny`, `MemberNode`, `LastValues`, `UpdateType`, `RequestTemplate`, `CollectionConfig`, `ReferenceMonth`, `DataModels.Kpi.ActiveTokenList`, `PolusBuffer`, `ExpansionResult`, `ListDatabasesRequest`, `SiteClient`, `ResponseMessage`, `LoggerService`, `PluginPackage`, `CategorizationAnalyzer`, `YAMLMapping`, `NoncurrentVersionTransition`, `CategoricalColorScale`, `CellPosition`, `TPageConfig`, `d.TypesMemberNameData`, `Dependence`, `IPrompter`, `MessageDataOptions`, `ShadowboxSettings`, `ValuedRivenProperty`, `ResourcePropsWithConfig`, `TextureConfig`, `DictionaryFile`, `ContractManifestClient`, `ITranslationMessagesFile`, `IndexDiff`, `SuperAgentTest`, `TS.Node`, `CssRule`, `Simulate`, `DownloadRef`, `TaskItem`, `GlobalPropertyStruct`, `NodeCheckFn`, `SkeletonTextProps`, `Gen`, `ExtendedAreaInfo`, `UpdateChannelReadMarkerCommandInput`, `AsyncWaterfall`, `AuthType`, `a`, `EntityProperty`, `ExtractClassDefinition`, `VerifiedCallback`, `CalcObj`, `TexturedStyles`, `DMMF.TypeInfo`, `LogicalQueryPlanNode`, `Workspace`, `FixOptions`, `StringToken`, `ProfileX`, `ConstraintSolver`, `RegistryPolicyTemplate`, `SVGGElement`, `Loggable`, `TimelineActivity`, `StartServices`, `ThresholdedReLU`, `MessageToken`, `ModelData`, `H.Behavior`, `KibanaRequest`, `FilteredLayer`, `SystemVerilogImportsInfo`, `EntitySelectorsFactory`, `BemSelector`, `DeploymentDisconnectStatus`, `DeleteIntentCommandInput`, `tf.LayersModel`, `TwingSourceMapNode`, `IRenderDimensions`, `DirectedEdge`, `Mocha`, `BinaryTree`, `EncodedQuery`, `ICheckOut`, `FileDeleteOptions`, `ResourceCacheData`, `WorkspacePlugin`, `AggTypeConfig`, `SelectionsWrapper`, `ComponentsCompiler`, `BuildData`, `CameraController`, `LyricLanguage`, `ComparatorFn`, `SubdivisionScheme`, `Listenable`, `CalendarPatterns`, `ClientSubLocation`, `t_As`, `Screen`, `PiLimitedConcept`, `GaugeRenderProps`, `TableSeg`, `IGenericTaskInternal`, `BodyParser`, `DocumentGenerator`, `Selector`, `ListAutoScalingPoliciesRequest`, `StaticFunctor`, `OptionsMatrix`, `VRMSpringBone`, `GitUrl`, `SourceMap`, `Props`, `d.HydratedFlag`, `ExportedConfigWithProps`, `GraphDataProvider`, `NodeDependency`, `P1`, `LeafletElement`, `requests.ListZoneTransferServersRequest`, `RelationshipPath`, `Poker`, `Yendor.Console`, `PackageJsonOptions`, `EditPageReq`, `ISummaryContext`, `GfxQueryPool`, `Volume`, `OutputWriter`, `estypes.AggregationsAggregationContainer`, `IStatusView`, `OutputCollector`, `INormalEventAction`, `RenderBatchKey`, `I18n`, `DeleteRetentionPolicyCommandInput`, `ts.Map`, `ProgressMessage`, `ClientImpl`, `Angulartics2GoogleGlobalSiteTag`, `EnvProducer`, `STS`, `WebPartContext`, `SingleConsumedChar`, `AddressNonces`, `XActorRef`, 
`FunctionCallArgumentCollection`, `CustomText`, `GfxBufferBinding`, `HintResults`, `NzDrawerRef`, `Labware`, `FriendRequest`, `QuestService`, `RoutingRule`, `IPathfindersData`, `InstrumentationLibrarySpans`, `EntityOp`, `ListUserProfilesCommandInput`, `Powerlevel`, `MIRRegisterArgument`, `ethers.providers.JsonRpcProvider`, `LoadedTexture`, `Charge`, `ProgramCounterHelper`, `MagicOutgoingWindowMessage`, `TransactionEventType`, `AtRule`, `GlobalDeclaration`, `IEditEntityByMemberInput`, `NestedOptionHost`, `InfluntEngine`, `ClassRefactor`, `MetamodelService`, `ChartjsComponentType`, `PortObject`, `Guy`, `ValidationResultsWrapper`, `SwitcherItem`, `requests.ListCloudAutonomousVmClustersRequest`, `LLVMNamePointer`, `DalgonaState`, `DiagnosticsOptions`, `CreateAppointmentService`, `SecondaryIndex`, `FormType`, `SubscriberEntity`, `S3Source`, `EntityDefinition`, `PositionProps`, `NexeCompiler`, `MiddlewareFnType`, `SetState`, `MeetPortalAnchorPoint`, `GestureUpdateEvent`, `ImportData`, `GlyphSet`, `TimelineActivityKind`, `TableOptions`, `OutdatedDocumentsTransform`, `NodeI`, `BuildApiDecOpts`, `models.ISegement`, `TransactionResult`, `MinimalNodeEntryEntity`, `GetJobCommandInput`, `Path0`, `PanEvent`, `VfsEntry`, `IStorageSyncOptions`, `GenericAsyncFunc`, `AuthToken`, `SignedByDecider`, `StepChild`, `LinkedList`, `SymExp`, `DebugProtocol.ConfigurationDoneResponse`, `PropertyFlags`, `LoaderManager`, `RpcNode`, `TextureOverride`, `Pass1Bytes`, `WebpackAny`, `Loadable`, `ArgumentMetadata`, `ZipMismatchMode`, `PDFAcroPushButton`, `Toggleable`, `ICurrentControlValidators`, `Tally`, `TAction`, `FromViewOpts`, `AddAtomsEvent`, `TaskSchema`, `Node.Identifier`, `ArrayExpression`, `DropdownOption`, `ILoggedInUser`, `yubo.PlayOptions`, `d.MinifyJsResult`, `CheckboxValue`, `PinoLogger`, `AttributeType`, `CommonStyleProps`, `UserDto`, `ParsedQueryNode`, `FunctionParameter`, `OptiCSSOptions`, `TimeAveragedBaseRateOracle`, `ITokenInfo`, `NameObjFactoryTableEntry`, `VaultTimeoutService`, `ColumnBands`, `InputTypeComposer`, `SymbolWriter`, `OpenChannelObjective`, `AsyncIterableX`, `ChatError`, `ListServicesResponse`, `ArrowFunctionExpression`, `PlayerAggHistoryEntry`, `Record`, `AWS.CloudFormation`, `UserSettingsModel`, `btSoftBody`, `ClassExportDoc`, `StatsService`, `ICloudFoundryServerGroup`, `RequestCancelable`, `ApmPluginContextValue`, `IAresData`, `DMMFClass`, `Certificate`, `CreateHitTesterFn`, `TagEdit`, `VstsEnvironmentVariables`, `FetchEnd`, `DragDropIdentifier`, `ExecutionLogSlicer`, `WorkflowInputParameterModel`, `ContextMenuParams`, `FileBlock`, `AuditResult`, `ReturnModelType`, `ShoppingCart`, `Offsets`, `Country`, `OrderDirection`, `SignInPayload`, `TransitionDescription`, `CommonToolbarItem`, `SpaceService`, `Errno`, `InitTranslation`, `BabelPluginChain`, `RequestSuccessAction`, `rp.OptionsWithUrl`, `Batch`, `Debug`, `PathFragment`, `BitcoinTransactionInfo`, `BSplineCurve3dH`, `CombatVictorySummary`, `Note`, `CrochetCapability`, `d.LogLevel`, `TrueGold`, `ImageService`, `IBranch`, `PumpState`, `DemoService`, `Rule.RuleListener`, `_IRelation`, `DoorFeatureType`, `CommandReturn`, `IAdministrationItem`, `ResourceComponent`, `ExecutableSpec`, `KeyBindingCommandFunction`, `SelectedState`, `EngineArgs.DevDiagnosticInput`, `BookmarkHelperService`, `AST.SubExpression`, `DocumentConnectionManager`, `NamePosInfo`, `ObjectAllocator`, `ColorHelper`, `ProjectedDataItem`, `Math2D.Box`, `ReleaseDefinitionSchema`, `IPageNode`, `RegularizationContext`, `Closeable`, `WithStringLiteralProperties`, 
`ApplicationMetadata`, `ExtensionDefinition`, `CasesClientArgs`, `RevealConfig`, `IJSONResult`, `EntityMapEntry`, `AppModels`, `StreamQuery`, `gameObject.Bullet`, `JsonDocsTag`, `VFileCompatible`, `StatusBarWidgetControlArgs`, `cPhs__Status`, `Speaker`, `FakePlatform`, `AuthenticatedUser`, `HttpResources`, `Placement`, `BaseConfig`, `t.STSelector`, `DescribeEventAggregatesCommandInput`, `AccountTransfersService`, `Crypto`, `OrthogonalDirection`, `FormatCodeSettings`, `ActionCallback`, `RespondersThemeType`, `LayerState`, `ConfigurableEnumConfig`, `DevicesButtonStrings`, `Config`, `TestReport`, `Router.RouterContext`, `MultiChannelAssociationCCRemove`, `MediationRecord`, `ElectrumNetworkProvider`, `IFBXLoaderRuntime`, `PythonDependency`, `CarouselProps`, `ShapeModel`, `TagNode`, `Regularizer`, `ITestsService`, `api.IZoweDatasetTreeNode`, `Insight`, `IconTheme`, `CountableTimeInterval`, `sinon.SinonSandbox`, `OutputFormat`, `CloudWatchDestination`, `ImportInterfaceWithNestedInterface`, `UntagResourceRequest`, `CallAndResponse`, `PromiseFast`, `JsxClosingElement`, `AsyncThunk`, `WebGL`, `RicardianContractProcessor`, `ForkEffect`, `RenderState`, `BottomSheetOptions`, `ModelInterface`, `ObservableQuery`, `CustomHttpResponseOptions`, `TaskRecord`, `ProxySide`, `CarouselService`, `TSpan`, `CreateScope`, `CourseState`, `DeleteGatewayCommandInput`, `Sample`, `ShapeGeometry`, `PiTriggerType`, `CreateFilterCommandInput`, `SavedObjectsCreatePointInTimeFinderDependencies`, `PieceAppearance`, `Segment3`, `requests.ListApplicationsRequest`, `TypingGenerator`, `ActionState`, `DOMExplorerDashboard`, `CeloTxReceipt`, `IShadowGenerator`, `ColumnMapping`, `ColumnSchema`, `PostToken`, `FunctionType`, `TypeAttributeKind`, `SwitcherResult`, `ActiveMove`, `ListComprehensionIfNode`, `ColonyExtensionsV5`, `VMContext`, `ReadTarball`, `PointerInfoBase`, `Resolvable`, `CreateWidgetDto`, `CardType`, `Jws`, `UpdateRouteCommandInput`, `S2Options`, `CommandResult`, `IGBPackage`, `BaselineResult`, `EToolName`, `BitmapFont`, `ThenableReference`, `BRepGeometryCreate`, `DomainPanel`, `PolicyContext`, `ClozeDeletion`, `Transaction.Options`, `CompilerInput`, `ContextName`, `SourceRootInfo`, `XYLayerConfig`, `ListenerCallback`, `ILexoNumeralSystem`, `UserMatched`, `LayerValue`, `JoinGroupRequest`, `DataRequestMeta`, `EditorOptions`, `FileMetaData`, `ControlState`, `TranslationLoaderService`, `CustomMerge`, `PrevoteMessage`, `JWKInterface`, `WideningContext`, `OpenApiSpec`, `HoverSettings`, `NixieEquipment`, `GenericMonad`, `SecondaryUnit`, `FullName`, `VirtualKey`, `EitherExportOptions`, `HsLogService`, `LunarInfo`, `SpeechConfig`, `d.HostRef`, `Cipher`, `requests.ListAcceptedAgreementsRequest`, `NetworkType`, `DMChannel`, `Oas20Parameter`, `ResourceQuota`, `WorkerResult`, `App.windows.window.IXulTrees`, `IPayloadAction`, `SingleKeyRange`, `IAPProduct`, `ISearchSource`, `ImageSource`, `CustomTemplateFindQuery`, `UpgradeDomain`, `TypePath`, `EventNameFnMap`, `AutoCompleteProps`, `Plyr`, `EnvironmentSettings`, `IGherkinOptions`, `SelectionModel`, `AnyClass`, `GetUserSuccessPayload`, `BoxBuffer`, `IRangeResponse`, `ColorInputProps`, `DistinctOptions`, `TransformHeadersAgent`, `ClipboardService`, `Descriptions`, `LinkedEntry`, `TAny`, `QualifiedOTRRecipients`, `StateStore`, `IGeometryProcessor`, `PhysicalKeyboardKey`, `RoamBlock`, `Tristate`, `TrackGroupIndex`, `Bleeps`, `AList`, `CalendarEventStoreRecord`, `LogicCommand`, `LogBoxLayout`, `StaticEllipseDrawerService`, `UpdateAlbumDto`, `HTMLStencilElement`, `ClientErrorResponse`, 
`MinAdjacencyListDict`, `InstallOptifineOptions`, `TArgs`, `RequestPopupModelAction`, `OptimizedSubSetKey`, `T.Position`, `RendererFactory3`, `DescribeGlobalClustersCommandInput`, `ShellOptions`, `MainPackage`, `IApplyJobPostInput`, `GetAccountSettingsCommandInput`, `ListConfig`, `PrismaClientClass`, `PluginCallbacksOnSetArgument`, `ODataPathSegments`, `LanguageVariant`, `ObservableProxy`, `JSDocState`, `OtherInterface`, `GlobalSearchResultProvider`, `IApplicationOptions`, `optionsInfo`, `BrowserPlatformUtilsService`, `CustomNode`, `AssetManifest`, `EmojiListObject`, `MIDIControlListener`, `AggregationResultMap`, `LoaderProps`, `ForceDeployResultParser`, `FrontendLocaleData`, `WellKnownTextNode`, `DraftHandleValue`, `DescribeDeviceCommandInput`, `ISwissKnifeContext`, `FeaturesDataSource`, `ITimezoneMetadata`, `CloudKeyStorage`, `ValueValidationFunc`, `IApp`, `XmlMapsCommandInput`, `HypermergeNodeDetails`, `ViewEntityOptions`, `IHandlers`, `GetDomainCommandInput`, `DriverMethodOptions`, `CartesianChart`, `ConfigStore`, `ValidationErrors`, `CipherCreateRequest`, `RemoveTagsFromResourceCommandOutput`, `TagDescription`, `Utf8ToUtf32`, `ValuesProps`, `ResolvedTupleAtomType`, `LastInstallFlag`, `BitWriter2`, `IObservableObject`, `IMethodHandler`, `Thrown`, `EvaluationScopeNode`, `MVTFieldDescriptor`, `StrategyParameter`, `IMutableVector2`, `EMSTermJoinConfig`, `INodeIssues`, `IDraggableList`, `ObjectDefinition`, `CookieParseOptions`, `LengthPrefixedString`, `FunctionBreakpoint`, `React.ReactChild`, `PortalInjector`, `RpcServerFactory`, `DropdownOptions`, `DynamicIndentation`, `MDXRemoteSerializeResult`, `EquipmentSharingService`, `DisplayCallbacks`, `ConstraintMember`, `SVFloat`, `sinon.SinonSpyCall`, `ParamWithTypeMetadata`, `MetricDimensionDefinition`, `KubeArgs`, `MetadataTypeGatherer`, `HandlerDelegate`, `ArrayLiteral`, `LockOptions`, `ListTranscriptionJobsCommandInput`, `HierarchyOfArrays`, `AttachedModule`, `ApiDefinition`, `MenuModelConfig`, `InboundMessageContext`, `ColorResolvable`, `GDIContext`, `FieldMap`, `GfxSamplerBinding`, `RoomClient`, `Prediction`, `DateFnsInputDate`, `AddressProtocol`, `TrueSkill.RankState`, `ArrayOrSingle`, `StateManager`, `TestAccount`, `NumberShape`, `KontentHttpHeaders`, `PostDocument`, `DOMElementType`, `IPageHeader`, `requests.ListCatalogsRequest`, `ServiceKeyType`, `FormRenderProps`, `IMembership`, `TileMapArgs`, `StableTokenWrapper`, `SelectMenuItemProps`, `UniformsType`, `CodeEditor`, `HoverTarget`, `EffectsInvocationContext`, `AppProduct`, `SankeyDiagramLink`, `CreateChildSummarizerNodeFn`, `OptionedValueProp`, `StateEither`, `RxFormGroup`, `SplitStructureAction`, `IResponseAggConfig`, `Measurement`, `RecordingTemplate`, `AutocapitalizationInputType`, `WorkerOptions`, `userData`, `NcTemplate`, `CommonCrypto`, `PermissionOverwrite`, `RequestObject`, `IntrinsicType`, `DxTemplateHost`, `Phaser.Scene`, `BytecodeLinkReference`, `indexedStore.Store`, `OnboardingService`, `GaiaHubErrorResponse`, `ArangoSearchView`, `CdkVirtualScrollViewport`, `CreateGroupResponse`, `IsvDebugBootstrapExecutor`, `ISearch`, `Trilean`, `MeasureSpecs`, `IALBListenerCertificate`, `TTargetReference`, `ItemRequest`, `messages.Rule`, `ColumnMetadata`, `NexeFile`, `FilePreviewDialogRef`, `RoomStoreEntryDoc`, `GreetingWithErrorsOutput`, `UnwrappedObject`, `Manager`, `MotionChartData`, `TreemapSeriesOptions`, `FilterResult`, `EchartsProps`, `tf.Tensor4D`, `WidgetModel`, `DefaultSession`, `VMLClipRectObject`, `ITestConfig`, `IniFile`, `InteriorInternal`, `CredentialPreviewAttribute`, 
`LoginState`, `NodeModule`, `ApiItemContainerMixin`, `GfxrRenderTargetID`, `ParserOutput`, `OrderByDirection`, `DebugContext`, `ConsoleLike`, `Slice`, `TabbedAggResponseWriter`, `IClassExpectation`, `TopUpProvider`, `SVGElement`, `CacheStorage`, `AppMenuItem`, `StackElement`, `PositionChildProps`, `LinearFlowFunction`, `IDropdownOption`, `FatalErrorFn`, `Figure`, `ShadowAtlas_t`, `requests.ListCategoriesRequest`, `GfxInputLayout`, `Semiring`, `Width`, `ComplexNode`, `sdk.Connection`, `RawTextGetter`, `IdentifierListContext`, `CloudfrontMetricChange`, `FeedbackContextInfo`, `IndexPattern`, `CacheContextContract`, `ColorRulesOperator`, `JustValidate`, `ISerializedResponse`, `ComputedStateCreationOptions`, `IResults`, `OP`, `ReverseIndex`, `DestinyInventoryItemDefinition`, `BufferMap`, `ParsedGenerator`, `TraversalContext`, `ProtocolRequest`, `ITfsRestService`, `IConnectionsIteratorOptions`, `WorkspaceSchema`, `LengthUnit`, `VariantFunction`, `UpdateIntegrationResponseCommandInput`, `PatternSequenceNode`, `InternalCallContext`, `IGuardResult`, `ShellComponent`, `VorbisDecoder`, `thrift.Int64`, `ObservableTitleTopBar`, `SongResult`, `AccountClient`, `DynamicValue`, `BaseText`, `SfdxFalconInterview`, `AvatarCustomization`, `PartyService`, `Pixel`, `MulticallClient`, `SnotifyToast`, `CustomersService`, `AttributeViewInfo`, `EntityCollectionReducers`, `SignatureInformation`, `DropType`, `AdamOptimizer`, `ClassField`, `IProjectConfig`, `OperationRequest`, `RibbonButton`, `IndexColumnModelInterface`, `Tool`, `AnyGradientType`, `ExpBool`, `ClassBody`, `ProjectTaskProperties`, `ClaimData`, `VMoneyOptions`, `ContractCalls`, `Design`, `Themes.Theme`, `NoiseModule`, `MapboxMap`, `PayloadDictionary`, `DependencyResolved`, `AnalyzeOptions`, `Navigator`, `StoredItem`, `DataLabelOption`, `egret.DisplayObjectContainer`, `GameSize`, `TextInputOptionProps`, `DAL.KEYMAP_ALL_KEYS_UP_POS`, `chalk.Chalk`, `LanguageSettings`, `CompilerProvider`, `RecognizerConfig`, `EntitySubjectStore`, `GMxmlHttpRequestEvent`, `Functor3`, `AsyncStepResultGenerator`, `SearchClient`, `TermSet`, `IChatJoinProperties`, `Location`, `CheckState`, `ProjectedEdge`, `PartitionOptions`, `Referenced`, `AffineFold`, `CreateDemandDTO`, `DbPush`, `CssClassMap`, `AXNode`, `SlidingWindow`, `StateMap`, `VinVout`, `ControlButtonProps`, `MetaDataCollector`, `LazyScope`, `Polymer.Element`, `ImportDeclarationStructure`, `ITaskChainFn`, `AreaProps`, `DGuard`, `QueueConfiguration`, `vfs.FileSet`, `VerificationGeneratorDependencies`, `ModifierFlags`, `ProtobufValue`, `VAF1`, `TransferOffchainTx`, `IDocumentStorageService`, `requests.DeleteProjectRequest`, `UsePaginatedQueryMergeParams`, `T.MachineContext`, `sourceTextureFormat`, `RegistrationForm`, `ExtractionResult`, `RenderMethod`, `AnalysisConfig`, `TileStyle`, `StatusState`, `ITour`, `GLTF.IAccessor`, `GridNode`, `AssertionTemplateResult`, `ReplacementBuilder`, `FileEditAction`, `FaunaId`, `MySQLConnection`, `FzfOptions`, `AccessRuleCriteria`, `AccountStellarPayments`, `OrganizationUnitDto`, `GX.ColorSrc`, `IUploadResult`, `ParseErrorCode`, `THREE.Raycaster`, `Vector3`, `LoaderConfig`, `SalesSearchOptions`, `SelectionEvents`, `DispatchProps`, `SyncedActivityModel`, `Stock`, `IMdcRadioElement`, `ThyDragDirective`, `FiberRoot`, `VisualizeEmbeddableFactory`, `IngredientOrResult`, `SsrcDescription`, `DocumentOptions`, `MetricUnit`, `RenderingDeviceId`, `ILecture`, `GLTFLoader`, `MpqFile`, `ITarget`, `RemoteBaseMock`, `vile.IssueList`, `mat3`, `RestServerConfig`, `IActionTrackingMiddleware2Call`, 
`ComponentRuntimeMeta`, `AssembledPipelinesGraphics`, `MessageThreadStrings`, `AccountsOperationIO`, `AuthTokenRequestSigner`, `OverloadedFunctionType`, `NzDrawerService`, `DeclarationBase`, `ScoreDoc`, `MultipartFile`, `StateMapper`, `FkDstrGridData`, `MassetDetails`, `EmitHint`, `EsHitRecordList`, `WildlingsAttackGameState`, `d.ConfigBundle`, `PointToPointLine`, `CronJobOptions`, `Composition`, `QRFunction`, `EmptyEventCreator`, `BlockSyntaxVersion`, `FetcherContext`, `TransformOrigin`, `MarketDataProvider`, `InstanceWithExtensions`, `RemoteConfigTemplate`, `VaultStorageService`, `FilesService`, `FirebaseError`, `ZoneSwitch`, `DataServiceError`, `ts.PostfixUnaryExpression`, `Mustering`, `TerminalState`, `TypeReference1`, `MatchExecutor`, `MLKitVisionOptions`, `mixed`, `ModelName`, `AtemConfiguration`, `BufferColumn`, `LockTime`, `WindowFrameName`, `KeySet`, `IsSpecificRowFn`, `PluginsClient`, `MetaQuestion`, `CliProfileManager`, `StreamingCmd`, `PSIInteger`, `Pallete`, `AngularFireUploadTask`, `DescribeResourcePolicyCommandInput`, `CallErrors`, `ReflectContext`, `MapViewInset`, `ThemeSetup`, `NodeList`, `Electron.IpcRendererEvent`, `TickFormatterOptions`, `MapShape`, `ActionsList`, `RentalService`, `NodeRecord`, `ResizerMouseEvent`, `NodeRpcService`, `TableInfo`, `UserStateService`, `WhereExpressionBuilder`, `S1GRDAWSEULayer`, `PiePoint`, `IRCMessage`, `RtpPacket`, `TerraformAuthorizationCommandInitializer`, `KeyAlgorithm`, `IObservableArray`, `ThemeInterface`, `SectionDataObject`, `INetworkInfoFeature`, `BarycentricTriangle`, `RepositoryCommonSettingType`, `Value.Of`, `InvalidState`, `PatternEnumPropertyOption`, `InputsType`, `MigrateStatus`, `CalendarMode`, `CallHierarchyDefinition`, `ActionPayload`, `ListCustomVerificationEmailTemplatesCommandInput`, `CompilerSystemRealpathResults`, `UserInterests`, `RotationSettings`, `KeyStop`, `SimpleType`, `DesignTimeProperty`, `JupyterKernel`, `ThyOptionComponent`, `SelectableDataPoint`, `PluginOpaqueId`, `PanGestureEventData`, `AwsCloudProvider`, `ExecaSyncReturnValue`, `TinyTranslatorService`, `ObjectOrArray`, `IFB3DOM`, `LambdaDataSource`, `ITimesheet`, `SHA256`, `GraphqlData`, `IIssue`, `eventInterface`, `EventListenerOrEventListenerObject`, `QuotePreference`, `SelectOptions`, `WeakSet`, `JSDocVariadicType`, `AsyncFrameworkFn`, `ethereum.Event`, `IColorSet`, `CurrencyObject`, `FileWatcherEventHandler`, `AddressRecord`, `USER`, `TodoDataService`, `ComponentSize`, `DisplayValue`, `ClassMap`, `ObjectValidator`, `PackageJsonLookup`, `ImmutableObjectiveGroup`, `SymbolTickerOrder`, `AuthType.Sponsored`, `MapScalarsOptions`, `TestERC721Token`, `AliasesMeta`, `JsonApiDocument`, `DescribeUserCommandInput`, `Eci`, `Pswp`, `FreezerContract`, `GQLEventSearchResultSet`, `ByteData`, `IMainState`, `TestAdapter`, `StoryLabel`, `SearchStrategyRequest`, `IAureliaComponent`, `CreateWorkspaceCommandInput`, `SinonStubbedInstance`, `SegmentClient`, `PageTemplate`, `ChartDataSet`, `ISettings`, `PromptOptions`, `ConfigurableFocusTrapConfig`, `TestSuite`, `TaskReport`, `PyrightJsonDiagnostic`, `DateTimeOffset`, `SurveyObjectItem`, `GroupRepresentation`, `WebSiteManagementModels.SiteConfigResource`, `HeaderMapManipulator`, `IGeneralFunctions`, `DatabaseInfo`, `CreateAssetProps`, `HandlerDomProxy`, `vscode.NotebookCell`, `StorageCacheService`, `ExportTraceServiceRequest`, `InventorySocket`, `RhoProcessor`, `StopInstanceCommandInput`, `ICordovaAttachRequestArgs`, `HuffmannNode`, `HasFancyArray`, `MinorEvent`, `DefaultValue`, `GraphicsLayerOptions`, 
`SavedSearchSavedObject`, `RenderResult`, `BoolArray`, `ElmType`, `ContainerAdapter`, `IWrappedEntityInternal`, `ArchiverError`, `GetAuthorizationTokenCommandInput`, `ApiExperiment`, `FnN5`, `SceneActor`, `IterableChangeRecord`, `ChromeStart`, `LanguageInfo`, `BlockDefinition`, `Answerable`, `AuthRequest`, `FormLabelProps`, `Persister.IPersist`, `SideNavComponent`, `SynthesisContext`, `Points`, `SpriteFrame`, `GoogleDriveSyncMetadata`, `Metric`, `Pooling1DLayerArgs`, `TCommand`, `CommonAlertParams`, `IntermediateTranslation`, `WrapExportedEnum`, `GDQOmnibarListItemElement`, `AssetWithMeta`, `DaffCategoryPageLoadSuccess`, `AddressSpace`, `TranspileResults`, `IconDefinition`, `LineView`, `QLabel`, `ChartState`, `GitBranchReference`, `BuildNoChangeResults`, `GX.FogType`, `SocketMessage`, `DynamicInputModel`, `FSM`, `StatePropertyAccessor`, `TextSpan`, `SCN0_Camera`, `IVector2Like`, `DatabaseType`, `BitcoinNetwork`, `TNew`, `ZAR`, `DataItems`, `Left`, `GetDomainDeliverabilityCampaignCommandInput`, `CGPoint`, `IEventSubscription`, `Scanner`, `BookingService`, `OpenYoloInternalError`, `CharList`, `WorkspaceSettings`, `DataKeyTypes`, `AddressChainType`, `OAuthAuthCode`, `ConnectedSpaceGraphics`, `GlobalStyleComponent`, `InitState`, `LanguageServerInterface`, `ClothingProps`, `DataPumpExcludeParameters`, `MiddleColumnPadCalculator`, `ViewQueriesFunction`, `IconProp`, `Arrayable`, `StarPiece`, `LayoutChangeEvent`, `TReturn`, `QRCodeNode`, `PrStatistics`, `CpuInfo`, `HsLayerManagerService`, `HasAttributeExpr`, `ManifestBuilder`, `PromptModule`, `Migration`, `RestEndpoint`, `PaginateConfig`, `ParsedColorValue`, `SerializationOptions`, `TextPlacements`, `UnitNormArgs`, `VerifyConditionsContext`, `Degrees`, `LayerPanel`, `TestCallback`, `OhbugConfig`, `EntitySchemaField`, `EntityActionFactory`, `SqlHelper`, `I18nFeature`, `SavedObjectReference`, `ServiceModule`, `ApiRx`, `DescribeCodeReviewCommandInput`, `ServerClient`, `HighContrastMode`, `BackendService`, `IRead`, `PersistedStore`, `ICompileOptions`, `RuntimeEnvironment`, `LogStackedLayout`, `SpriteBaseProps`, `IParticleSystem`, `LegacyAPICaller`, `OutputPort`, `TXReport`, `DispatcherEmitter`, `PrepareOptions`, `ThemeSettingsBreakpointAny`, `DockerFacade`, `JSDocNullableType`, `SecureNoteData`, `SpotMarketConfig`, `EncoderOptions`, `UpdateClusterResponse`, `FilterExpression`, `StorageManagementClient`, `ContractAddresses`, `LiteralTypeNode`, `Readme`, `DisplayOptions`, `SystemManagerImpl`, `egret.Shape`, `RedisStore`, `TestingWindow`, `LitvisDocument`, `VideoType`, `HubProduct`, `ModelCache`, `CSC`, `NotificationRequest`, `Statistics`, `WriteOptions`, `RTCRtpSendParameters`, `Try`, `OpenSearchDashboardsReactNotifications`, `DocgeniHost`, `RBNFSetBase`, `ListenerOptions`, `Locatable`, `PureTransitionsToTransitions`, `GfxInputLayoutP_GL`, `CreateWebACLCommandInput`, `PrivateAuthenticationStore`, `ProfileState`, `MikroORMOptions`, `IUserDto`, `GeomGraph`, `TransactionsBatch`, `ConfigHandler`, `MatchedPointType`, `AppTheme`, `LogObject`, `ViewportCallback`, `DateFormat`, `LiveMap`, `EntityCompanionDefinition`, `CorrelationIdGenerator`, `ELEMENT`, `IVConsoleNode`, `KCDLoader`, `IBooleanFlag`, `StackItemLike`, `PositionNode`, `Diffs`, `NotificationDataOptions`, `DomainBudget`, `PreKeyBundle`, `SCN0_LightSet`, `IChoiceGroupOption`, `ts.ForInStatement`, `TimeTrackingEntryIded`, `UrlWithStringQuery`, `VariableStatement`, `ObjectContext`, `Gui.Widget`, `AttributeReader`, `IVirtualDeviceValidatorResultItem`, `RQuota`, `NavigationProps`, 
`CreateIdentityProviderCommandInput`, `RestResponse`, `IdempotentParameterMismatchException`, `CapacityReservation`, `ChannelUpdateMessage`, `FieldRenderProps`, `IMemoryDb`, `MapFnOrValue`, `JSONSchemaSettings`, `IDeltaManager`, `FriendshipPayload`, `VTTCue`, `TypingsData`, `React.FormEventHandler`, `InstantComponentTransformation`, `NearestPointOnLine`, `ApiDoc`, `SimpleItemPricing`, `Pagerow`, `MappingItem`, `SelectorItem`, `TriumphRecordNode`, `OpenSearchDashboards`, `KeyCurve`, `Point3d`, `SeriesParam`, `Index`, `MarkdownService`, `HTMLTableElement`, `ToRunType`, `ReadModelStore`, `Array2DHashSet`, `VisualizeInput`, `FileWriter`, `TransactionInfo`, `ViewStateProps`, `ItemValue`, `Dishes`, `AddApplicationReferenceDataSourceCommandInput`, `SearchConfig`, `PlayerId`, `PopupDispatcher`, `ModifyEventSubscriptionResult`, `Attr`, `ALObjectWizardSettings`, `InstanceSummary`, `UtilityService`, `IBsLoadingOverlayOptions`, `MatRadioButton`, `DocumentSelector`, `TestingProject`, `ProviderItem`, `ResizeOptions`, `HistoryQuery`, `FilterMetadataStatusValues`, `BlockType`, `RepositoryRepository`, `EditorInspectorService`, `SessionRequest`, `FormBuilderService`, `Repeat`, `IAmazonNetworkLoadBalancerUpsertCommand`, `GherkinDocument`, `Quaternion`, `LangType`, `DataLayout`, `RumInitConfiguration`, `MinionStatus`, `DAL.DEVICE_ID_DISPLAY`, `PullState`, `SpacePropValues`, `Moon`, `VisibilityGraph`, `ClearingHouse`, `Benchmark.Event`, `FilterHeadersStatusValues`, `RedirectTask`, `IPackagesService`, `SpriteService`, `ConversationRecognizer`, `AutoconnectConfig`, `IAppDef`, `UpdateVpcLinkCommandInput`, `GLfloat2`, `BodyInit`, `StagePanelsManager`, `IParallelEnumerable`, `ifm.IHeaders`, `FeeType`, `postcss.Container`, `AzureNamingServiceOptions`, `request.Response`, `SymbolVisibilityResult`, `LngLatBounds`, `DBusClient`, `EventOptions`, `DataSourceService`, `MatrixItem`, `ErrorToastOptions`, `SplitAreaDirective`, `SelectionState`, `ConfigRuntime`, `HTMLIonContentElement`, `IWorkflowDb`, `ActionTicket`, `IUIField`, `MultiRingBufferReadableStream`, `QueryLanguage`, `SparqlItemService`, `AsyncQuery`, `TreeViewExpansionEvent`, `RadioGroup`, `SliderState`, `CommerceLayerConfig`, `DebugProtocol.AttachResponse`, `BuildPageRangeConfig`, `apid.RuleSearchOption`, `PokemonSet`, `ElementGeometryResultOptions`, `SubscribeResult`, `callback`, `CustomState`, `Next`, `Ids`, `OcsShare`, `ReplyChannelRangeMessage`, `IdType`, `BinaryValue`, `FieldView`, `OpenAPIParser`, `DescribeServicesCommandInput`, `FaastError`, `BrowserWindowRef`, `dia.Paper`, `SWRInfiniteKeyLoader`, `TerminationStrategy`, `AddArrayControlAction`, `SDKBase`, `Boolean`, `SwitcherItemWithoutChildren`, `Nuxtent.Query`, `TileDescriptor`, `ApplicationCustomizerContext`, `SpeechContext`, `TaskEither`, `BinaryExpression`, `IMyOptions`, `SplitTest`, `LineString`, `SVSize`, `TileMeta`, `NavigationTreeViewModel`, `PaymentsError`, `NodeInstance`, `ITranslation`, `ModuleNode`, `DeleteObjectCommandInput`, `CreateListenerCommandInput`, `ISimpleGridEdit`, `ZeroXPlaceTradeDisplayParams`, `SelectorCache`, `PlotAreaOptions`, `TypeElement`, `JsonRpcSigner`, `KanbanSplitResult`, `DecoderResult`, `TimeBucketsConfig`, `IgnoresWrappingXmlNameCommandInput`, `KeyType`, `apid.ProgramGenreLv1`, `QuicTags`, `DataMapper`, `EfParticle`, `OncoprintModel`, `FileSystemAdapter`, `Watching`, `virtualFs.Host`, `MaestroTipoModel`, `CoreDependencies`, `MapPartsShadowType`, `ClientLibraryState`, `ClassNameCollector`, `LocalFileName`, `FileUpload`, `DbList`, `CompleteLayerUploadCommandInput`, 
`OrganizationConfig`, `alt.Entity`, `WebHook`, `ForgedResponse`, `GX.IndTexBiasSel`, `DiffColumn`, `OnCameraFrameCallbackResult`, `CertificateDTO`, `tf.Tensor3D`, `InjectedIntl`, `EventTypes`, `AgentPolicy`, `ILocale`, `ArgonWebView`, `IdeaTags`, `EPerson`, `PlaybackState`, `IItemUpdateResult`, `IInstantiationService`, `EventBuilder`, `ISolutionWithFileIds`, `CompositeCollectionJavaIterator`, `ErrorCorrectionLevel`, `CameraContext`, `IWhitelistUserModel`, `ts.AnyObject`, `TextChunk`, `BemToBlockClassMap`, `DateHelperService`, `CkbTxInfo`, `UniListItem`, `PendingSuiteFunction`, `TexChunk`, `Suite`, `SelectedGroups`, `SonarQubeMeasureResponse`, `AuthenticationHeaderCloud`, `ResolvedGlTF`, `ConnectListener`, `IForwardIterator`, `SelectionMode`, `FilterStateStore`, `MdcDialogConfig`, `d.TransformCssToEsmInput`, `CDPSession`, `IRawOperationMessage`, `DeleteWorkspaceCommandInput`, `FilterComponentSettings`, `DialogSource`, `PlasmicLoaderConfig`, `msRest.HttpRequestBody`, `RxjsPipeline`, `CdkTableDataSourceInput`, `DataDirection`, `SequenceExpression`, `FunctionToMemoize`, `PrimaryFeaturePrivilege`, `TCacheKey`, `ScreenState`, `ResourceFetcher`, `s.CodeGeneratorRequest`, `HdPrivateKey`, `TSFile`, `RumPerformanceEntry`, `VertexAnimationEffect`, `HierarchyParents`, `MaybePatterns`, `ActionsService`, `ElasticsearchBoolQueryConfig`, `GrabListener`, `SchemaProvider`, `RARCFile`, `Matrix2x3`, `AWS.DynamoDB.DocumentClient.Key`, `OpenSeaPort`, `DesugaringContext`, `SessionOnDisk`, `SavedObjectsClientProvider`, `FieldFormatter`, `TypedDictEntry`, `UpdateDistributionCommandInput`, `ListSourceApplicationsRequest`, `AnimatedClock`, `IComponent`, `TypeAnnotationNode`, `UserMedia`, `UrlPropertyValueRenderer`, `PublicAppInfo`, `NZBResult`, `IMyDate`, `AbiOwnershipBody`, `Air`, `DeviceClass`, `IntersectionObserver`, `TrackingInfo`, `DirectSpiral3d`, `LoginSuccessCallbackResult`, `ListServiceQuotasCommandInput`, `ts.GetAccessorDeclaration`, `AddTagsInput`, `ClassifyService`, `TokenFlags`, `RoleContext`, `ScopeMap`, `ChannelEthContract`, `AutoImportSymbol`, `SpreadAnalysisResult`, `ChunkContentCallbackArgs`, `DrawerControl`, `DecodedToken`, `ClientStatus`, `GithubGlobalConfig`, `IInputList`, `DefineComponent`, `UITextPosition`, `PrivateUser`, `CombinedJobWithStats`, `ReduceArguments`, `IStateContext`, `FramerAPI`, `Dialog`, `IExecutionContextProvider`, `Affect`, `FrameEntryType`, `ImagePipeline`, `IntegrationTenantService`, `Articles`, `RenderRule`, `TypedAxiosResponse`, `IPlugin`, `ContributionProposal`, `StartStopSingle`, `IndexerManagementModels`, `TransferDetails`, `ACLService`, `IParameterDefinition`, `ValidationRuntimeOptions`, `TLE.FunctionSignatureHelp`, `UnusedAttribute`, `FilterBuilder`, `SupervisionContext`, `ConversionType`, `RequestApproval`, `SnackbarType`, `ResolutionConfig`, `Aggs`, `FadingFeatureParameters`, `THREE.OrthographicCamera`, `MessageResp`, `ISetCategoricalFilter`, `LegendPosition`, `THREE.WebGLRenderTarget`, `FTPResponse`, `SuiLocalizationService`, `CiBuildInfo`, `RangeFieldMeta`, `RenameModuleProvider`, `UnformattedListItem`, `HealthCareApisClient`, `PermissionOverwrites`, `ProjectLocale`, `IAssetTag`, `ProductControlState`, `ObjectId`, `SugiyamaLayoutSettings`, `InterleavedBufferAttribute`, `DeleteDatasetResponse`, `ViewerEventType`, `ITagsState`, `CurrentMoveInfo`, `TouchList`, `IMask`, `RecurringBill`, `CSSStyleDeclaration`, `HomogeneousPatternInfo`, `TransitionDoneFn`, `SubscriptionDiagnosticsDataTypePriv`, `ConnectController`, `WorkspaceConfiguration`, `GenericType`, 
`OauthSession`, `IProfile`, `VertexDeclaration`, `TFLite`, `LinearGradient`, `ProcessQueue`, `Peer`, `ClassAndSelector`, `EPerspectiveType`, `TableState`, `Projector`, `ts.TransformationContext`, `RangeContext`, `Decorations`, `TimeSeriesMetricDataPoint`, `ICustomFunctionParseResult`, `DateRangeInputProps`, `HSD_TEArg`, `ForwardingConfig`, `RpcRouter`, `App.storage.IStorageApiWrapper`, `Chart`, `CallbackEntryHelper`, `BooleanNode`, `SGroup`, `Patch`, `ElasticsearchConfigType`, `QueueModel`, `IVisibilityJobPostInput`, `DecoratorFn`, `MetronomeNote`, `BundlingOptions`, `CacheContext`, `ethersProviders.Provider`, `PrimaryButtonProps`, `UniLoginSdk`, `AnalyzableProperty`, `BatchProcessResponse`, `L.List`, `TextToSpeechClient`, `LastfmTopTracks`, `StorageState`, `AnalysisResult`, `StringListNode`, `nock.Scope`, `CreateProjectRequest`, `StyleClasses`, `RPCRequest`, `IConfigurable`, `DeleteYankChangeAction`, `SVGPolygonElement`, `TransformId`, `GX.TexGenMatrix`, `TwoFactorEmailRequest`, `StubBrowserStorage`, `IFooter`, `ConnectionEvent`, `RenderOutput`, `CookieService`, `BinaryEngine`, `TimePoint`, `TagModel`, `RoverWorkload`, `TryPath`, `CodeBuild`, `Intl.DateTimeFormat`, `ExecutionState`, `SignatureData`, `Continue`, `UserPositionsAccount`, `ShorthandProperty`, `FilterFormatterFunction`, `RMSPropOptimizer`, `MetaData`, `AuthTokens`, `GanttGroupInternal`, `AnyState`, `HomebridgePlugin`, `ThyOptionSelectionChangeEvent`, `Db`, `ITimelineData`, `ExtractControlValue`, `Host`, `TypeScriptServerHost`, `OrderBy`, `IFormatterParserResult`, `ace.Editor`, `ChannelMessageUpdate`, `IDEntry`, `EvaluatorFlags`, `InitializationData`, `CephLine`, `SampleUser`, `NineZoneStagePanelManager`, `EditPhotoDto`, `FeedbackActions`, `InitAckChunk`, `comicInterface`, `GethRunConfig`, `paper.CompoundPath`, `VercelClientOptions`, `ProjectVersionMeta`, `DeleteStorageObjectsRequest`, `CurrencyOption`, `TransportTime`, `puppeteer.JSHandle`, `SubscribeState`, `MiddlewareType`, `MutableColorRgba`, `RunContext`, `CtxLike`, `ReminderFormatType`, `SignatureHelpParams`, `NineZoneStagePanelManagerProps`, `Before`, `ImportRecord`, `MethodVisitor`, `PropertyDeclarationStructure`, `NgModule`, `MessageKind`, `ResourceField`, `TMethod`, `AggregatedColumn`, `Magma`, `MouseEvent`, `UpdateStreamCommandInput`, `EnvoyHttpRequestInit`, `TimestampManager`, `PartsType`, `TodoListModule.Actions`, `ExtractActionFromActionCreator`, `Languages`, `SymbolIntervalFromLimitParam`, `sdk.SpeechRecognitionCanceledEventArgs`, `AppImages`, `IClusterContext`, `Networks`, `MetadataSchema`, `HTMLHRElement`, `I18nStart`, `DeserializeWireBaseOptions`, `HttpHandler`, `ContractTransactionOverrides`, `SystemPortalSelectionTag`, `DotnetInsights`, `DebugProtocol.ThreadsResponse`, `MigrationLifecycleStates`, `MessengerData`, `WritePayload`, `ParamsOfAppDebotBrowser`, `DatasetResource`, `PrincipalTokenCurveTrie`, `chrome.runtime.Port`, `MapPoint`, `Filler`, `DigitalComponent`, `Entity`, `UserRoleService`, `TransactionFormState`, `JsxAttributes`, `DropdownProps`, `Clipper`, `AssetEvent`, `OutputType`, `Arity`, `Dictionary`, `TestResource`, `ConsoleTransportInstance`, `IncrementalQuinTree`, `UseGoToFormConfig`, `UIFont`, `triggeredTrap`, `FlowExhaustedMatch`, `vscode.OpenDialogOptions`, `FormFieldPreviousValueObject`, `AuthActions`, `VideoRateType`, `ISnapshotOptions`, `GitBuffer`, `Contributor`, `AvailabilitySlotsService`, `AccountStellarPaymentsConfig`, `ViewSize`, `BrowseCloudBatchJob`, `SearchItem`, `VideoPreference`, `NextCallback`, `Processed`, `LambdaFunction`, 
`DropResult`, `IModDirection`, `IApiStashTabSnapshot`, `OrganizationalUnit`, `ClientCredentialsResponse`, `StateReaderObservableEither`, `VersionArray`, `TrackedAbility`, `NormalizeContext`, `SubscribedObject`, `OutputChannel`, `BuildingFacade`, `LocationDescriptor`, `NotificationCallback`, `StickyVirtualizedListProps`, `Unbind`, `ICommitAuthor`, `Models.OrderStatusUpdate`, `StateWrapper`, `PkgJSON`, `ModelSpec`, `IMedia`, `RouteAction`, `BindingDescriptor`, `ELineTypes`, `LimitOrder`, `HotModuleReplacement`, `IOwnProps`, `ITaskAgentApi`, `BotTelemetryClient`, `TypeDefinition`, `MessageStatus`, `GenericTagId`, `RBNFRule`, `LnRpc`, `BoosterConfig`, `babel.ObjectExpression`, `IProblem`, `ContractMethodDescriptorClient`, `TaskResult`, `FunctionArgument`, `IAchievement`, `database.DataSnapshot`, `Formula`, `FindListOptions`, `EnabledPoliciesPlan`, `BufferArray`, `EnvoyContext`, `EntityMaterialParameters`, `Sync`, `CheckNodeResult`, `ODataPathSegmentsHandler`, `CodeBlock`, `ICredentialType`, `PlayOptions`, `WindowProtocol`, `UserRepresentation`, `IExtraArgument`, `SyncDBRecord`, `RecursivePartial`, `TriggerType`, `IconifyBrowserCacheType`, `coreClient.FullOperationResponse`, `AuthorizedClientRequestContext`, `ScriptLoaderService`, `BindingMetadata`, `MatDialogContainer`, `ServerRegion`, `CryptoCurrency`, `ProgressHandler`, `StringTable`, `SelectableListState`, `DataflowState`, `SplitLayoutNode`, `PoseNetConfig`, `UseFormValues`, `ConditionTypeEnum`, `UpdateUserCommandInput`, `SectionComponent`, `IModify`, `WifiNetwork`, `IKeyboardFeatures`, `IScalingPolicy`, `TestFormComponent`, `FormattedExecutionResult`, `WorkflowModel`, `Namespace`, `AnnotationProviderBase`, `FirebaseApp`, `StylableTransformer`, `EidasResponse`, `CppRequestSpan`, `LangiumLanguageConfig`, `Tests`, `ParsedData`, `Flattened`, `DragHandle`, `TransitionService`, `FormatterConfig`, `Crumb`, `ListSettings`, `CompilerTargetHandler`, `HandPoseOperatipnParams`, `THREE.Vector3`, `OpeningHour`, `CookieSerializeOptions`, `DemoMeta`, `StringToUtf32`, `fs.ReadStream`, `ValidateResult`, `AggsStartDependencies`, `DropAction`, `CGAPIResourceHandle`, `ColorObject`, `DraftEditorCommand`, `PvsResponse`, `ManagementDashboardForImportExportDetails`, `MqttClient`, `Singleton`, `ErrorMessages`, `Facebook`, `bindable.BindingOptions`, `PlayerViewCombatantState`, `t.AST`, `Attendee`, `StatefulChatClientWithEventTrigger`, `OperatorToken`, `Ped`, `requests.ListViewsRequest`, `PanelSide`, `ListWorkRequestErrorsRequest`, `JsonClassTypeOptions`, `TDiscord.MessageReaction`, `ParsedNumber`, `KnownFile`, `NodeOutput`, `PDFTextField`, `QueryCreateSchema`, `CancellationId`, `IICUMessageTranslation`, `SqliteStatement`, `EdmxEnumMember`, `IHotKeyConfig`, `ValueAxis`, `ChangesType`, `LayoutElement`, `FeatureManager`, `LCH`, `ex.Scene`, `KeyVaultManagementClient`, `StyleResourcesLoaderNormalizedOptions`, `VerifiedStateUpdate`, `Taint`, `IAureliaClassMember`, `Toggle`, `ITargetReference`, `NofloComponent`, `RequestQueryParser`, `WorldBuilder`, `TestPage`, `CellRenderer`, `MediationRecipientService`, `BehaviorDescription`, `IApiServer`, `FindWithRegexCb`, `TrezorTransport`, `ICreatorOptions`, `ObjectRemover`, `ActivityService`, `FibaroVenetianBlindCCSet`, `WithdrawStakingRewardUnsigned`, `InternalServerErrorException`, `FeatureItem`, `V1DeleteOptions`, `ObservableThese`, `CodeMirror.Position`, `JsxAttribute`, `NzNotificationService`, `ConfigMigrator`, `GoogleAuthProvider`, `StateDecoratorAction`, `Marks`, `SVGStyle`, `UpdateDomainResponse`, `SafeParseReturnType`, 
`ArcadeBody2D`, `HalResource`, `BoardEvent`, `TreeviewItem`, `ParsingExtension`, `LinePointItem`, `TextVerticalAlign`, `ICircle`, `CreateGroup`, `NamedTensorMap`, `ExecuteResult`, `React.ReactNodeArray`, `server.TextDocument`, `LoadAction`, `UrlMapping`, `d.ComponentRuntimeMembers`, `PosSpan`, `SingleOrMultiple`, `DepListener`, `requests.ListStreamsRequest`, `JNICallbackManager`, `NavigationEvent`, `TSAudioTrack`, `ICancellable`, `SignalingClientSubscribe`, `ViewDefinition`, `Lane`, `ResourceConflictException`, `EventStore`, `AccountDetails`, `DOMInjectable`, `ErrorLike`, `ICompetition`, `NodeWithScope`, `BaseNode`, `DevtoolsInspectorState`, `RegionInfoProvider`, `C3`, `DebugProtocol.SetBreakpointsArguments`, `MVideo`, `MDCDrawerAdapter`, `ElasticSearchOptions`, `IProductCreateInput`, `IRating`, `MdxModel`, `apid.GetReserveListsOption`, `ExecutionRole`, `Stylable`, `BoosterGraphQLDispatcher`, `TuxedoControlCenterDaemon`, `RestElement`, `RaguServerConfig`, `RuntimeDatabase`, `PlaceholderReference`, `TrimmedDataNode`, `NavigationBarNode`, `TokenizerState`, `FormFieldType`, `KudosPollService`, `IConfirmProps`, `NamePath`, `ISubscription`, `requests.ListPublicationPackagesRequest`, `Archive`, `TextAlign`, `InstancedMesh`, `UserPhotosState`, `Z64Online_EquipmentPak`, `SnippetModel`, `SpriteVID`, `EncodeOption`, `GetDomainItemsFn`, `InterceptorManagerUseParams`, `ReportingDescriptor`, `JsxFragment`, `GroupInput`, `requests.ListConfigurationsRequest`, `EnhancementCache`, `ExtrusionFeature`, `BasicBlock`, `UnbindFn`, `TextDocumentContentProvider`, `CompileState`, `GroupMember`, `PageDependencies`, `HeatmapVisualizationState`, `MinMaxConstraint`, `UIDialog`, `CallSite`, `IMappingState`, `MockOptions`, `TranslatorService`, `ChemController`, `EffectPreRenderContext`, `ErrorTransformer`, `ToggleCurrentlyOpenedByRoute`, `MomentData`, `AgeRepartitionType`, `ITransitionData`, `TuplePage`, `PumpCircuit`, `CalendarContext`, `MspDataView`, `HdStellarPayments`, `ComputedStyle`, `IProjectWizardContext`, `TEX1_TextureData`, `MeiliSearch`, `CoreImageEnt`, `ID`, `ManyToManyPathMap`, `HookReturn`, `CasesClientMock`, `UA`, `Solar`, `PrismaClientRustPanicError`, `AnnotationControllable`, `EmbeddedViewRef`, `AsyncHierarchyQuery`, `ProjectionMetadata`, `NotificationStartedInfo`, `ISubView`, `Keymap`, `GetDeviceCommandInput`, `IterationTypes`, `BookmarkTreeItem`, `CursorQueryArgsType`, `requests.ListHttpRedirectsRequest`, `StaticConfig`, `Models.Timestamped`, `IEmailOptions`, `ELU`, `unwrapContext`, `SmallMultiplesSpec`, `NativeStorage`, `ast.QuoteNode`, `TestStepResultStatus`, `MatrixDynamicRowModel`, `GetTokenResponse`, `ITransactionData`, `AriaLivePoliteness`, `CliScriptGenerator`, `ThyFormValidatorGlobalConfig`, `IDynamicPerson`, `QObject`, `IHubSearchOptions`, `JestExtRequestType`, `PointModel`, `JsonUnionsCommandInput`, `PostProcess`, `Model.Element`, `TokenDocument`, `requests.ListIPSecConnectionTunnelSecurityAssociationsRequest`, `StepVariable`, `sdk.TranslationRecognizer`, `SupervisionCCGet`, `HostStatus`, `CmsService`, `J3DModelData`, `ServiceList`, `ApplicationCommand`, `IIconItem`, `Pick`, `RecvDelta`, `RemoteRenderInfo`, `AccountStatus`, `ClassProperty`, `ClassResources`, `PrismTheme`, `UnitOfWork`, `pxt.Asset`, `ContainerRegistry`, `CoreTypes.TextAlignmentType`, `InternetGateway`, `messages.TestStepResultStatus`, `AqlQuery`, `OptionsService`, `UserAsset`, `TreeViewInfo`, `Strings`, `OutputGroup`, `pf.StackContext`, `SchemaValidatorFunction`, `ListContext`, `DataEntity`, `IGetTimesheetInput`, `ModdedDex`, 
`html.Element`, `ts.ArrayTypeNode`, `ToggleType`, `PartyCreate`, `OptionalObjectSchema`, `TransportConfiguration`, `MapLeafNodes`, `ReadOnlyReference`, `PawnFunction`, `HostState`, `DeleteApplicationOutputCommandInput`, `ServiceBase`, `IEndpointSpec`, `DispatchFunction`, `String`, `IInvoice`, `RTCPeerConnection`, `Parser.SyntaxNode`, `TreeModelNodeInput`, `LoopConverter`, `TypeOrmHealthIndicator`, `TranslationWidth`, `Http3RequestNode`, `QuestionStatus`, `RelationPattern`, `ImportNameWithModuleInfo`, `ParsedTsconfig`, `WayPoint`, `SharedDirectory`, `InvalidArnException`, `IQService`, `LocationInfo`, `RoutableComponent`, `MuteConfiguration`, `AlainAuthConfig`, `CreateScriptCommandInput`, `XMLElement`, `FourSlash.Range`, `Accessibility.ChartComposition`, `StakingCall`, `HtmlTag`, `RouteParam`, `BrowserLaunchArgumentOptions`, `SavedObjectOpenSearchDashboardsServicesWithVisualizations`, `FontInfo`, `API.storage.PrefObserverFactory`, `ILoader`, `Timetable`, `SliceAction`, `TSelections`, `AsyncHooksContextManager`, `Multicast`, `NativeView`, `FeatureKibanaPrivileges`, `Variables`, `ImportLookupResult`, `SecretManagerServiceClient`, `GlobalEventDispatcher`, `IVideoPlayerState`, `RedditComment`, `THREE.ShaderMaterialParameters`, `PathParams`, `CopyResults`, `requests.ListVnicAttachmentsRequest`, `Nameserver`, `AudioContextManager`, `Wiki`, `ConfigSetter`, `FSOperator`, `PyrightJsonResults`, `StreamReader`, `vscode.Selection`, `CreateEventSubscriptionCommandInput`, `VisitedItem`, `DiscordUser`, `InputType`, `IComponents`, `OctokitProvider`, `TestConfig`, `GenericRequestMapper`, `PropertyChangeData`, `MentionsState`, `paper.ToolEvent`, `KeyCode`, `L.Map`, `WlPeer`, `BaseAdapterPool`, `EnumTypeComposer`, `HypermergeWrapper`, `ProjectConfiguration`, `CompletionEntryDetails`, `IDOMRule`, `IGitAccount`, `IPluginAuth`, `V1CustomResourceDefinition`, `LocationChange`, `RulesProvider`, `AddressNode`, `requests.ListProtectionRulesRequest`, `Synthetic`, `UpdateCommand`, `UIViewAnimationTransition`, `Receipt`, `AddToLibraryActionContext`, `kChar`, `ZesaruxCpuHistory`, `PopupOptions`, `OwnProps`, `EventForDotNet`, `ListDeploymentsCommandInput`, `ContractsSection`, `IDatepickerLocaleValues`, `AuthorizationService`, `IOSDependencyConfig`, `StorageHelper`, `TimerInfo`, `ChannelMessageAck`, `CardRequirements`, `Danmaku`, `ListWorkRequestsResponse`, `INgWidgetEvent`, `ScaffdogError`, `PrimedGroup`, `TiledMapFeatureData`, `LogSampleTimestamp`, `DAL.DEVICE_ID_BUTTON_A`, `TLE.FunctionCallValue`, `ClippedRanges`, `AlignmentDirectional`, `AbstractControlState`, `ResolvedFunctionTypeParam`, `Technical`, `DirectiveHook`, `NTPTimeTag`, `FileBrowser`, `ICompareValue`, `MiRectConfig`, `AnyShape`, `ISeriesApi`, `EditModes`, `DecorateContext`, `TemplateExecutor`, `JSXContext`, `ContractFraudProof`, `BubbleSeries`, `LifelineHealthCheckResult`, `HitSensor`, `ViewAction`, `MatomoTracker`, `PanInfo`, `VectorOrList`, `Static`, `Distribution`, `KirbyAnimation.Duration`, `DAL.DEVICE_ID_BUTTON_AB`, `DeleteDetectorCommandInput`, `DeployLocalProjectConfig`, `RouterConfigOptions`, `SPDestinationNode`, `StandaloneDb`, `ClientAPI`, `MenuItemType`, `RouteComp`, `ConnectionPool`, `estypes.SearchHit`, `ContractInstance`, `Unregistration`, `IrecService`, `RouteSegment`, `AccessorCache`, `ConfigMap`, `OutHostPacket`, `FlipSetting`, `CookiesFilterParams`, `LiteElement`, `Htlc`, `SignalRConfiguration`, `NetworkPluginID`, `IMidwayContainer`, `Pair`, `ActionPlugin`, `DeleteSessionCommandInput`, `TransferItem`, `MessagingOptions`, `IUserRole`, 
`YearProgressModel`, `UpdateRegistryCommandInput`, `BinSet`, `TELibCall`, `FIRQuerySnapshot`, `PresentationTreeDataProvider`, `ITooltipProperty`, `Token`, `GatewayConfig`, `sinon.SinonSpy`, `MagitRemote`, `RequestPayload`, `TextOrIdentifierContext`, `IAuthUser`, `TestingUser`, `GUIController`, `FilterDefinition`, `SpriteArray`, `JsxChild`, `ExistingAccountError`, `ResponsiveProperties`, `HTMLIonToastElement`, `ObjectID`, `VsixInfo`, `DescribeHomeRegionControlsCommandInput`, `Tooltip`, `ImportNamespace.Interface`, `QuadrantDirection`, `RunLengthChunk`, `Keplr`, `DocumentSymbol`, `VideoInputDevice`, `common.Region`, `VisibilityMap`, `PopupManager`, `CallMethodResult`, `ListChannelMessagesCommandInput`, `CancelSubscription`, `PackageFiles`, `IConfiguration`, `TestEntry`, `EYaml`, `RouterDirection`, `TList`, `DeleteUser`, `CacheValue`, `PDFAcroTerminal`, `ImgAsset`, `StatefulCodeblock`, `SystemStats`, `ShareUserMetadata`, `SlideDirection`, `EventMessage`, `Dereferenced`, `PolymorpheusContent`, `IPageModel`, `InjectorServer`, `Events.pointerwheel`, `WorkspaceHost`, `TypesImportData`, `MalformedPolicyDocumentException`, `MigrationDiff`, `Vis`, `DiscogsReleaseInfo`, `SxParserState`, `jsPDF`, `WssRoom`, `SyncStore`, `HsLayerDescriptor`, `IdeaEntity`, `IBranding`, `CopyDescriptor`, `CallStatus`, `StatusAction`, `AggsCommonSetup`, `types.IActionContext`, `Hub`, `StellarSignatory`, `InteractionSettings`, `FlatVector`, `TextDocument`, `Contract`, `INestMicroservice`, `TestSourceIO`, `LinkedHashMap`, `Portable`, `TronUnlock`, `Generics`, `instantiation.IConstructorSignature8`, `PublicUser`, `PeerSetupWithWallets`, `vec4`, `Multi`, `RegInfo`, `monaco.editor.IStandaloneCodeEditor`, `XLSX.WorkBook`, `ZoomSettings`, `AbstractVector`, `StatusCodes`, `TransientBundle`, `GoogleAnalyticsService`, `uproxy_core_api.Update`, `GitTagReference`, `NotificationColumnFilters`, `Registration`, `EpochIteratorConstructor`, `TextureState`, `Colord`, `IVocabularyTag`, `BoardSlice`, `XYState`, `ImageData`, `FunctionC`, `MockModelRunner`, `ParserEnv`, `AnimGroup`, `InstanceNamedFactory`, `Tensor2D`, `JSX.TargetedKeyboardEvent`, `VisTypeDefinition`, `Operands`, `EChartOption`, `BackupRequest`, `IScalingProcess`, `ReactionMenuState`, `null`, `StudyConstraint`, `HsLayerSelectorService`, `StaticOperatorDecl`, `ApiKeyProps`, `d.CompilerJsDocTagInfo`, `FindManyOpts`, `ELBv2`, `BasePrismaOptions`, `InputControlVisDependencies`, `ErrorBarStrings`, `DiscussionReplyEntity`, `StoreEnhancerStoreCreator`, `WebGLRenderingContextExtension`, `EntityName`, `EnhancedGitHubEvent`, `ThyTableGroup`, `PiecePosition`, `InternalRouteInfo`, `IServerGroup`, `AssociationCC`, `CandleLimitType`, `HealthCheckService`, `LoadEvent`, `VersionedSchema`, `MissingItem`, `IGCNode`, `TokenVerifier`, `WillExecutedPayload`, `SomeInstance`, `BlockedRequester`, `ColorComponent`, `GithubAuthTokenRepository`, `LineItem`, `CellConfig`, `ResumeNode`, `MessageConversation`, `FakeNumericDataset`, `ApiChanges`, `MediaItem`, `EntityCollectionCreator`, `TimelineDivisionBase`, `SnapConfig`, `ReqWithUser`, `ColonToken`, `SkinId`, `BackstageItemsManager`, `ReadableStreamDefaultReadResult`, `StaticPathLoader`, `UpdateChannelResponse`, `Resolved`, `IconData`, `Tokens`, `DisposableObservable`, `ChartKnowledgeBaseJSON`, `LoadConfigInit`, `SelectionLocation`, `MIRInvokeBodyDecl`, `VisibleBoundary`, `GoogleWebFontData`, `WebhookPayload`, `MigrationsContract`, `KeyboardLayoutData`, `ComponentTable`, `d.SitemapXmpOpts`, `IMoveDescription`, `DirectoryWatcher`, `USampler3DTerm`, 
`KMSKeyNotAccessibleFault`, `MarkdownSimpleProps`, `IDelta`, `PropertyConverterInfo`, `FfmpegCommand`, `TypescriptServiceClient`, `StateTransition`, `MdcDialog`, `IAppState`, `SystemState`, `ClassLikeDeclaration`, `SymbolOptions`, `TokenConfigs`, `BinaryEncoding`, `IContainerType`, `PersistedLog`, `UsageCollectionPlugin`, `instantiation.IConstructorSignature6`, `IBlockchainObject`, `DecipherGCM`, `DecodeContinuouslyCallback`, `cloudwatch.Metric`, `TaggedState`, `AggTypeAction`, `JProject`, `LayoutOption`, `NETWORK`, `IFoundElementHeader`, `UserBuildConditionals`, `TreeResult`, `TensorOrArrayOrMap`, `EqualFunc`, `_Connection`, `ContentLayoutDef`, `CombinedField`, `Accent`, `SelectQueryBuilder`, `TransformerArgs`, `ItemContext`, `TrackInfo`, `ClickSource`, `IEmbedVideoOptions`, `ThyFlexibleTextComponent`, `PipelineNode`, `PayloadBundleSource`, `PaginationParams`, `vscode.Location`, `RepositoryCommonSettingValueDataType`, `TexturizableImage`, `ProviderInput`, `ICassExploreModuleState`, `HttpConfig`, `LanguageService`, `VariantType`, `HaliaPlugin`, `FeaturedSessionsActions`, `ChatFlowPack`, `LayoutPane`, `ThemesTypes`, `OrderedHierarchyIterable`, `MockResponseInit`, `MouseDownEvent`, `React.PropsWithoutRef`, `CollectionService`, `Blobs`, `ParameterToken`, `CaseReducerActions`, `Vec2Sym`, `PriceHistoryMap`, `DiffPatcher`, `DeleteContactCommandInput`, `JwtHeader`, `AnyTable`, `IMockEvent`, `ElementDefinitionContext`, `VersionConstraintContext`, `TruthTable`, `ISPTermObject`, `GfxrAttachmentClearDescriptor`, `HdLitecoinPaymentsConfig`, `Monad2`, `SchemaInput`, `TLeft`, `DisplayValuePropertyDataFilterer`, `CreateJoinQueryOptions`, `HeaderGetter`, `StorageModuleOptions`, `WorkspaceSetting`, `KintoObject`, `ReplayContext`, `HtmlProps`, `DocumentLink`, `ModLine`, `IAstElement`, `ValueResolver`, `IRequestHeaders`, `CellPlugin`, `WebWorkerEnvironment`, `ArrayBufferLike`, `Entity.Status`, `NextPage`, `ConfigInfo`, `ExternalSourceFactory`, `OpenSearchInterval`, `IStyle`, `SignedByQuantifier`, `DecodeOutput`, `NzTabSetComponent`, `PlayerStatus`, `CardCommon`, `ConfigRoot`, `ExecutionStatus`, `CephAngle`, `TasksState`, `QueryResultProps`, `Rating`, `ExtendedUser`, `RebirthWindow`, `LogSource`, `P3`, `GridItemHTMLElement`, `GroupBy`, `ICheckboxProps`, `FsWriteOptions`, `GitService`, `d.HostRule`, `EntityMap`, `TransformCssToEsmInput`, `NullableLocatable`, `XSLTToken`, `PartialExcept`, `CellRepo`, `JavaScriptDocument`, `GroupOrName`, `LQuery`, `InvalidateMask`, `LanguagesEnum`, `ListContentConfig`, `PodSecurityPolicy`, `NativePlaceTradeChainParams`, `Github.PullRequestsGetResponse`, `BuildingTree`, `ViewportSize`, `DepthwiseConv2DLayerArgs`, `CommandLineParser`, `ScanArguments`, `IconSettings`, `ArmService`, `IDataState`, `Branched`, `CompletionContext`, `Klass`, `ManualConflictResolution`, `T5`, `JsonfyDatasource`, `DefinerClauseContext`, `JssState`, `NoneType`, `BoardBase`, `ObjectUpdatesEntry`, `ExceptionType`, `IEcsServerGroupCommand`, `SetOpts`, `TaskSpec`, `RestOrderbookRequest`, `DOMMatrixInit`, `Parallelogram`, `QualifiedValueInfo`, `VisualizeSavedObjectAttributes`, `GlobalParametersService`, `IPostMessageBridge`, `HelmManager`, `Dog`, `SignalRService`, `XmppChatAdapter`, `DataSink`, `RawBlock`, `Ninja`, `HybridOffsets`, `BrowserControllerReturn`, `SwitchWatcher`, `IntersectionInfo`, `SyncState`, `TypeClass`, `EventsFactory`, `DayCellStyle`, `ParsedStructure`, `Bucket`, `FunctionFragment`, `PopoverPosition`, `IOrganizationDepartmentCreateInput`, `TxParams`, `requests.ListLoadBalancersRequest`, `Hsv`, 
`CuePoint`, `BreakpointMap`, `Node.Node`, `ContentReference`, `RecordData`, `DropListRef`, `ethereum.CallResult`, `CLM.Template`, `UpdateState`, `FastFormFieldMeta`, `GeometricElement3dProps`, `ContractFunctionEntry`, `DeleteServiceCommandInput`, `ECH.CommandClient`, `ScopeType`, `FileTransport`, `ts.SignatureDeclaration`, `Graphic`, `DejaViewPortComponent`, `http2.ClientHttp2Session`, `CSS.Properties`, `DirectiveList`, `ComponentEnhancer`, `TransactionCache`, `ActionResult`, `MiToolsSyncData`, `PacketChunk.TypeTCCStatusVectorChunk`, `ElementGeometryInfo`, `configuration.LaunchConfiguration`, `CreateManyInputType`, `ClientCardIded`, `SecurityRule`, `CANNON.Vec3`, `d.SourceTarget`, `CompositeSubscription`, `Place`, `OptionalDefaultValueOrFunction`, `IpcAPI`, `BitcoinBalanceMonitor`, `PublicRelayerConfig`, `JobId`, `Biota`, `NodeCreator`, `ConciseBody`, `OpenNodeTracker`, `Degree`, `CeramicCommit`, `CreateAppFunction`, `ConflictNode`, `ResourceSource`, `Option`, `IncludeRecord`, `Cart`, `IRouteMatch`, `ResponsiveService`, `ChatMessage`, `RpcMessageData`, `FloatFormat`, `NewWindowWebContentsEvent`, `HitDetail`, `NexusScalarTypeDef`, `UseCaseExecutor`, `IUserOrganization`, `SegmentAPIIntegrations`, `Broker`, `DMMF.ModelMapping`, `Oni.Plugin.Api`, `TypeKindEnum`, `CorePreboot`, `Snake`, `MarkdownTreeNode`, `GetMeetingCommandInput`, `YRange`, `QueryMap`, `PackagedNode`, `AcceptTokenResponse`, `TransportRequestOptionsWithMeta`, `d.TypeInfo`, `SMTCallSimple`, `ClipRectAreaModel`, `HistoryValue`, `Rest`, `FakerStatic`, `AZDocumentSymbolsLibrary`, `ProjectPost`, `RulesClientFactory`, `SymbolData`, `CreateEmManagedExternalExadataMemberEntityDetails`, `GherkinLine`, `Commune`, `RelationIndex`, `Translator`, `CommonState`, `StreamDescriptions`, `ReturnCode`, `GitUser`, `NetworkSubgraph`, `BaseStateContainer`, `IMutableGridCategoryItem`, `ListMigrationsRequest`, `GenerateInFolderOptions`, `DeleteAssociationCommandInput`, `S1`, `ImportAdder`, `TurnContext`, `IHttpInterceptController`, `BaseName`, `requests.ListConnectionsRequest`, `DrawEvent`, `JoinedEntityMetadata`, `IICUMessageCategory`, `RewriteAppenderConfig`, `GenericCompressorProperty`, `AnalyzableNode`, `NoticeToastRequest`, `CategoryChannel`, `SignedCipherObject`, `FKRow`, `MetricService`, `LiteralTypeBuildNode`, `FragmentElement`, `DocumentInterface`, `AuthorizeConfig`, `EthArg`, `T9`, `PointerCoords`, `IFuture`, `CompleterComponent`, `OpenSearchDashboardsLegacyPlugin`, `TemplateOutput`, `ProjectionType`, `Bm.Dest`, `AR`, `ListServersCommandInput`, `inquirer.Answers`, `ListReservationsCommandInput`, `ICircuit`, `GroupedTask`, `OperationLoader`, `ParamFunction`, `TxOutput`, `Gunzip`, `ImageTemplate`, `DataGroup`, `SlashingProtectionBlock`, `AuthenticatedSocket`, `requests.ListImageShapeCompatibilityEntriesRequest`, `ForceGraphLink`, `IJsonDocument`, `SchemaProps`, `ECPair`, `SubtitlesState`, `ErrorCode`, `JQuery`, `NamedFragmentDefinition`, `KratosService`, `ClassName`, `TTK1AnimationEntry`, `RelayModernEnvironment`, `BullBoardRequest`, `WebpackConfigurator`, `TestContract`, `CloudFrontRequestEvent`, `ColumnChartOptions`, `AClassWithSetter`, `GeoLocation`, `DateRawFormatOptions`, `LoadOpts`, `WudoohStorage`, `MigrationItem`, `AuthTokenService`, `Mode`, `DomService`, `CertificateConfigType`, `ABLDocument`, `IErrorsBySection`, `GridCellParams`, `IAbstractGraph`, `ErrorHttpResponseOptions`, `AnyExpressionTypeDefinition`, `MappingLine`, `PUUID`, `AdapterContainer`, `Generations`, `SBDraft2CommandLineToolModel`, `SpriteState`, `PromiseMap`, 
`LogicalType`, `HttpResponseCreated`, `MockCallback`, `CipherView`, `FloodProcessEnv`, `NodeStack`, `Events.pointerdragleave`, `HasShape`, `TenantSettingService`, `RawTransaction`, `TabRepository`, `SelfList`, `MeasureUnit`, `FileStatusBar`, `OutputContext`, `HelloMessage`, `E2EElementInternal`, `express.Handler`, `QueryListsCommandInput`, `TextLayoutStyle`, `TimeQuery`, `AggName`, `requests.ListStacksRequest`, `ResourceOptions`, `OidcProviderService`, `BooleanFilterFunction`, `KanbanBoardRecord`, `DeleteCertificateCommandInput`, `TraceSet`, `Response.Wrapper`, `GraphProps`, `IntersectionObserverCallback`, `GundbMetadataStore`, `ColumnNode`, `TestWorkspaceFactory`, `Operation`, `ApplicationContext`, `ml.Element`, `ExchangeQueryService`, `LogLevelValues`, `TestDoc`, `ApiResource`, `CreateResourceCommandInput`, `ProjectedPoint`, `DropdownItem`, `CALayer`, `BuildEntry`, `requests.ListNotebookSessionShapesRequest`, `ICategoricalLikeColumn`, `PropsWithChildren`, `SafeElement`, `DebugSession`, `INavigationData`, `ServerOptions`, `LendingPool`, `OutputTargetCopy`, `PusherChannel`, `JwtToken`, `pino.Logger`, `GroupDescription`, `requests.ListComputeCapacityReservationInstanceShapesRequest`, `ScrollItem`, `NodeLoadInformation`, `IPagination`, `ChartDataPoint`, `ThemeContextType`, `ZoomLevels`, `RegisteredMessage`, `PushContextData`, `CodeScopeProps`, `LanguageTag`, `ArticleState`, `ProfileInfo`, `CategoryProps`, `GeographicCRSProps`, `MatchedSegments`, `TrackQueryOpts`, `ProtocolNotificationType`, `jwt.SignOptions`, `ServerUtil`, `d.ScreenshotBuildData`, `ElementAction`, `IDataFilter`, `ISequencedOperationMessage`, `PayloadMetaAction`, `EventNames`, `TimerOptions`, `TableOfContentsEntry`, `JsonAtom`, `IPersonaProps`, `Sampler2DTerm`, `EdgeProps`, `AST.OperationNode`, `PlugyStash`, `OperatorEntry`, `VisualSettings`, `BinaryToTextEncoding`, `FSError`, `BB.Activity`, `HttpLinkHandler`, `Basis`, `DeleteJobTemplateCommandInput`, `ColumnMeta`, `Variance`, `ElementFinder`, `ListChannelMembershipsCommandInput`, `AccountActions`, `CommunicationParticipant`, `QueryParameterBag`, `HealthChecker`, `ValidatorFunctionType`, `NormalizedProblem`, `IRowProps`, `JIssue`, `PartialPerspective`, `PlainData`, `Authorizer`, `SqliteDatastore`, `MsgDeleteProviderAttributes`, `CoreTypes.TextTransformType`, `GuildDocument`, `MatchedItem`, `LocaleType`, `MqttOptions`, `HttpErrorResponse`, `Web3.TransactionReceipt`, `IConfigurationComponent`, `IExpenseCategory`, `ClassAst`, `BinanceConnector`, `Update`, `SubmissionService`, `SourceConfig`, `EtaConfig`, `SearchServiceStartDependencies`, `V1ConfigMap`, `ProcessedImportResponse`, `VueApolloRawPluginConfig`, `InvalidRestoreFault`, `IReport`, `XAxis`, `HttpStart`, `DbBlock`, `SortablePlayer`, `Abi`, `MdcDefaultTooltipConfiguration`, `ClassListing`, `BuildDefinition`, `LineAnnotationDatum`, `DinoRouter`, `IQueryInput`, `DeleteReplicationConfigurationTemplateCommandInput`, `GLclampf4`, `ILayer`, `ast.Node`, `Whiteboard`, `SendMailOptions`, `RAL.MessageBufferEncoding`, `ShadowRootInit`, `coreAuth.TokenCredential`, `ResourceLabelFormatter`, `SearchForLife`, `UpdateTargetMappingsWaitForTaskState`, `Attribs`, `ActionCreatorWithOptionalPayload`, `AddEventListenerOptions`, `Forecast`, `EdmTypeField`, `RangeSet`, `PLSQLConnection`, `Dealer`, `TreeType`, `ListIndicesCommandInput`, `GunScopePromise`, `CrudOptions`, `ActionTypeExecutorResult`, `STHConfiguration`, `TransmartConstraint`, `TFS_Build_Contracts.Build`, `DAL.KEY_T`, `DBDriver`, `GetInsightSummariesCommandInput`, `IPhase`, 
`CompilerErrorResult`, `DiagnosticOptions`, `WebSession`, `DeployProps`, `monaco.editor.IModel`, `SpreadSheet`, `StorageData`, `DiscordMessageProcessor`, `CalloutProps`, `ProductOptionGroup`, `UnitCheckboxComponent`, `CreateJobCommand`, `HeaderPair`, `ZBarInstance`, `IFieldPath`, `CircuitBreakerOptions`, `QueryHelperService`, `ResAssetType`, `TableColumnConfig`, `DeploymentSummary`, `ECSqlStatement`, `FirestoreAPI.Value`, `CurrentDevice`, `MdxModelInstance`, `SGItem`, `vscode.QuickPickItem`, `VfsObject`, `ApplicationListenerArgs`, `UpdateConnectionDetails`, `PropertyEditorInfo`, `MassMigrationCommitment`, `VectorArray`, `DRIVERS`, `StacksTransaction`, `VirtualData`, `WritableFilesystem`, `NotificationDocument`, `SwapOptions`, `ClientFile`, `HmiService`, `ListImportsCommandInput`, `HttpArgumentsHost`, `LegendStrategy`, `AspidaResponse`, `PeerService`, `AuthenticationDetailsProvider`, `OperatorDescriptor`, `RunCommandInput`, `Eq.Eq`, `PanelPoints`, `BottomBarArea`, `IMutableVector3`, `ConcreteRequest`, `MethodParams.ProposeInstall`, `MockUdpTally`, `ExpandableTreeNode`, `FileStat`, `AnimationEntryMetadata`, `ListIdentityProvidersRequest`, `Loop`, `LocaleTemplateManager`, `ValueOrLambda`, `Cached`, `ExecResult`, `STATUS`, `TraceSpan`, `JSDocNameReference`, `ts.VariableStatement`, `Toast`, `GetConfigCommandInput`, `MIRTupleType`, `StoredNetwork`, `OnDemandPageScanRunResultProvider`, `IFeatureComment`, `Finish`, `GetError`, `InfiniteData`, `FindSubscriptionsDto`, `ComponentTheme`, `MockAttributeMap`, `ZodType`, `AccountCustom`, `CommandExecutionContext`, `Scales`, `AnimVector`, `SettableUserCode`, `CancelSource`, `RawOperation`, `CoreConnection`, `ITreeNodeAttrs`, `t.SelectablePath`, `ISchemaGenerator`, `PCancelable`, `IQResolveReject`, `IButton`, `RenderPassContextId`, `I18nContextType`, `ListAssetsRequest`, `Installer`, `StylingBindingData`, `requests.ListKeysRequest`, `TestVectorResult`, `ITaskWithStatus`, `SyncTasks.Promise`, `ListReportDefinitionsCommandInput`, `AddonProperty`, `MsgCreateProvider`, `Workbench`, `ApplySchemaAttributes`, `ToolbarOrientation`, `SVGPath`, `CreateTestConfigOptions`, `SKFillItem`, `Foxx.Router`, `IProposalCreateInput`, `configuration.APIData`, `TestStream`, `SharedStreetsGeometry`, `EvaluatedMetric`, `KibanaPrivilege`, `FieldFormatId`, `GUI`, `HttpClientConfiguration`, `ExcaliburGraphicsContext2DCanvas`, `ImportCodeAction`, `DataFrame`, `FormFieldEditorComponent`, `MockResolvers`, `ComposedChartProps`, `WebpackChain`, `ListMultipartUploadsCommandInput`, `ResourceTag`, `RetryHelper`, `TreeNodeProps`, `ListTypeNode`, `CipherGCM`, `UpdateChannelMessageCommandInput`, `ConnectionConfiguration`, `DiffResult`, `BridgeContracts`, `DatabaseUser`, `SkillMapState`, `IEmailDomain`, `Logquacious`, `TypeRef`, `MergeStrategy`, `ProductModel`, `TokenRange`, `EmitterInstance`, `CurrencyCNYOptions`, `ProjectParser`, `StateTaskEither`, `MoveCheck`, `IPreviewProps`, `QueryProvidersResponse`, `providers.JsonRpcProvider`, `labelValues`, `IProjectRepository`, `Pen`, `BrowserHistory`, `React.ForwardedRef`, `RpcResult`, `BracketType`, `FnU4`, `Idl`, `OutputDefinitionBlock`, `PersonData`, `Mill`, `CustomQueryHandler`, `WorkspaceService`, `DoneInvokeEvent`, `ReleaseChangelog`, `IKeyboardDefinitionStatus`, `ConfigVersion`, `ts.server.ScriptInfo`, `JMap`, `MergeTree`, `requests.ListAvailableSoftwareSourcesForManagedInstanceRequest`, `GfxClipSpaceNearZ`, `AppInputs`, `Encryption`, `ELogLevels`, `Callsite`, `requests.ListAutonomousDbVersionsRequest`, `GenericDeviceClass`, 
`DisassociateFromAdministratorAccountCommandInput`, `pulumi.InvokeOptions`, `TexCoordsFunction`, `RenderPass`, `TEdge`, `EnumField`, `ArmSiteDescriptor`, `BoundMethodCreator`, `WalletService`, `ArcShape`, `IGESDocument`, `ObjectLiteralExpression`, `DAL.KEY_DOT`, `AWSPolicy`, `IServiceInjector`, `ServiceSummary`, `EffectResult`, `textFieldModule.TextField`, `OneOf`, `BuildInPluginState`, `InstancePoolInstanceLoadBalancerBackend`, `ExpressionAstExpressionBuilder`, `Knex.SchemaBuilder`, `AxisAlignedBounds`, `AppearanceMapping`, `VRMSchema.VRM`, `handler.Queue`, `TSerializer`, `Transformation`, `DataTypes`, `MapStoreState`, `ServerStyleSheet`, `IProfileModel`, `FormProps`, `NormalizedUrl`, `d.BuildCtx`, `TaskLabel`, `Yeelight`, `VisDef`, `EncryptionAtRest`, `BytecodeWithLinkReferences`, `GitReference`, `CLDRFramework`, `PyVariable`, `CloudFormationStack`, `TranslateList`, `ThemeProviderProps`, `Int16Array`, `DbMicroblock`, `IRegion`, `PmpApiConfigService`, `GroupState`, `NatF`, `RectangleSize`, `ReferenceIdentifier`, `MutableContext`, `SocketChannelServer`, `EncryptionType`, `JSONPropPath`, `OnDiskState`, `LookupFnResult`, `AzureTreeItem`, `ImportTypeNode`, `ODataQueryOptionHandler`, `TransistorEpisodeData`, `GithubUserResponse`, `CodeKeywordDefinition`, `RuleTester`, `IKeymap`, `GroupRegistryState`, `SessionProxy`, `AutorestLogger`, `FromTo`, `LangState`, `B`, `NetworkTraceData`, `RelationQueryBuilderContract`, `KeyboardEventInit`, `WorkBook`, `IsGroupIndentCellFn`, `SortBy`, `GenericClassProperty`, `DescribeRoutingControlCommandInput`, `ISoundSampleDescription`, `TextWithLinks`, `vscode.NotebookDocument`, `IServerParams`, `ImportRelativity`, `SWRHook`, `JobsService`, `FormikHelpers`, `Interview`, `ListHttpMonitorsRequest`, `Fuse`, `CreateForgotPasswordDto`, `ListNotebookSessionShapesRequest`, `EmailConfirmationHandler`, `ObserverLocator`, `AnimatedMultiplication`, `StackDeployOperation`, `windowPositionCalculationState`, `LevelUp`, `NumberLabel`, `IScheduleApiModel`, `DeviceConfig`, `_rest`, `ConfigTypes`, `CSSRuleList`, `CSSMotionProps`, `ConfigurationListItemType`, `DynamicModule`, `ElasticsearchError`, `InternalSettings`, `IDraft`, `DetailedCloudFormationStack`, `T1`, `FeatureCatalogueSolution`, `HandlerMetadata`, `ELO.RankState`, `ResolvedNode`, `RoxieResult`, `RequestHandler`, `GetRuleCommandInput`, `NetWorthItem`, `OpenSearchDashboardsReactOverlays`, `requests.ListInstanceConfigurationsRequest`, `XRangePoint`, `Frontmatter`, `PointerState`, `AnyFunction`, `PathItem`, `ConnectionStore`, `ScullyRoutesService`, `MultiSelectProps`, `VimCompleteItem`, `Blob`, `i18n.Node`, `CheerioOptions`, `SecurityGroupContextProviderPlugin`, `ReadValueIdOptions`, `IFormTemplate`, `HaredoChain`, `ISystemInfo`, `EventTopics`, `TransmartAndConstraint`, `ColorFactory`, `DeployOpID`, `requests.ListDbServersRequest`, `Organization`, `HashTable`, `UiActionsServiceEnhancements`, `PluginCodec`, `FlipCorner`, `Fs`, `TabbedAggColumn`, `ArcRotateCamera`, `NormalisedFrame`, `RSTPreviewConfiguration`, `ILogoProps`, `MockServiceClient`, `ResponseComposition`, `Quest`, `HlsManifest`, `LoadedExtension`, `ResourceInsightProjectedUtilizationItem`, `IWalkthroughStep`, `TimelineById`, `AdminAPI`, `GraphQLType`, `IRasterizedGlyph`, `PipeOptions`, `PedProp`, `ErrorPayload`, `SCN0_Light`, `TUser`, `CreateBackupResponse`, `GasComputation`, `ScrollEvent`, `AppStackMinorVersion`, `EDBEntity`, `SFValue`, `Rule`, `HalfEdgePositionDetail`, `StoredDocument`, `IpPort`, `CharacteristicValue`, `IndexGroups`, `XmlNodeNop`, `GraphError`, 
`BpmnContext`, `ITypeFilter`, `UpdateResults`, `ContextValueType`, `LoginUserDto`, `HttpRequestWithFloatLabelsCommandInput`, `ContractContext`, `MosaicNode`, `Rotation`, `ISuperBlock`, `RESTService`, `CarModel`, `GulpClient.Gulp`, `EventModelImplUnion`, `TimeRangeInformation`, `ReservedParameters`, `PlistValue`, `VitePluginFederationOptions`, `RegistryDataStream`, `SecretKey`, `StageInterviewRepository`, `FrontstageProps`, `CLM.TrainDialog`, `R2Publication`, `signalR.HubConnection`, `BaseThemedCssFunction`, `CloudFrontResponse`, `React.RefForwardingComponent`, `Internals`, `DataHandler`, `Models.User`, `requests.ListAvailableUpdatesForManagedInstanceRequest`, `TimePeriodField`, `SidePanelOpenDirection`, `ReadModelRuntimeEventHandler`, `NotebookSessionShapeSeries`, `DraftInlineStyle`, `ToggleConfig`, `PaginationInfo`, `RelationMeta`, `SessionTypes`, `IParserOptions`, `AVRInterruptConfig`, `PiCommand`, `ReactCrop.Crop`, `gcp.Account`, `Fx`, `TabItem`, `Evees`, `ActionListener`, `LoopNode`, `HomebridgeLgThinqPlatform`, `TEventRangeType`, `SharedGeometryStateStyle`, `ColumnOrder`, `ServiceURL`, `ArgumentListInfo`, `HostWatchEvent`, `SpacesClient`, `PrintableType`, `IntrospectionTypeRef`, `OperatorType`, `LineWithBound`, `CarouselState`, `StoppingCondition`, `AnimatorFlowValue`, `IReserve`, `WorkspaceInfo`, `HalfEdgeMask`, `core.Coin`, `CreateTableBuilder`, `PropertyEditorParams`, `GlobalStateT`, `TextRangeDiagnosticSink`, `ByteVectorType`, `InvocationArguments`, `protos.google.iam.v1.IGetIamPolicyRequest`, `SchemaObjectMetadata`, `SettingsProps`, `GenericStoreEnhancer`, `FlattenedType`, `RoleKibanaPrivilege`, `GetTestDestinationOptions`, `MessageParams`, `VarScope`, `GridValueFormatterParams`, `DepList`, `ListDataSourcesCommandInput`, `UseSidePanelProps`, `ng.IIntervalService`, `IInstrument`, `DefaultFocusState`, `LayoutSandbox`, `Tween24`, `PageMargins`, `CredDef`, `MessageDataType`, `TemplateOptions`, `ReadonlyJSONObject`, `SyncHook`, `DecoderError`, `OutputTargetWww`, `CausalRepoStore`, `SemanticClassificationFormat`, `StatsAsset`, `HTMLIonPopoverElement`, `VueQuery`, `RenderContext3D`, `ProgressProps`, `IControllerAttributeExtended`, `T8`, `ResolveOptions`, `ReadonlyMat`, `ECPoint`, `AnnotationSpec`, `EnumDef`, `ApiClient`, `UserPaypal`, `ts.IfStatement`, `BuildInstance`, `CloudFrontResponseEvent`, `Import`, `StateByProps`, `CallExpr`, `B12`, `ListChannelMembershipsForAppInstanceUserCommandInput`, `FeatureChild`, `NgbActiveModal`, `WatchEventType`, `SecurityClass`, `BlockClassSelector`, `TransformFnParams`, `VueAutoRoutingPlugin`, `TextData`, `StyleElement`, `IDynamicGrammar`, `SliceState`, `IndexImpl`, `CascaderContextType`, `PatternMappingExpandEntryNode`, `AssembledSubjectGraphics`, `IDictionary`, `AuthorizationCode`, `NotifierPluginFactory`, `SignalingOfferMessageDataChannel`, `FieldsInModel`, `ViewStore`, `SubEntityLocationProps`, `unchanged.Path`, `PerformDeleteArgs`, `IDispatchProps`, `TypePredicate`, `ProseNode`, `Conference`, `FullAgentPolicy`, `CucumberRunner`, `apid.StreamId`, `PaginationResponseV2`, `CalendarEvent`, `HsLaymanLayerDescriptor`, `HierarchyPointNode`, `ts.BindingElement`, `TabState`, `PythonPlatform`, `LineDashes`, `SnackBarOptions`, `ConfigImagery`, `TodoRepository`, `PromiseOrValue`, `Shortcuts`, `Twitter.User`, `ModelSpecBuilder`, `CursorProps`, `PersonChange`, `Description`, `RoomUser`, `ParseNodeArray`, `TouchBar`, `FileLocation`, `ProblemRowData`, `ITagInputProps`, `AnyGuildChannel`, `NoopExtSupportingWeb`, `SubsetPackage`, `IPackageDescriptorMap`, 
`FilterSettings`, `handleParticipantEvent`, `AST.AST`, `GrpcAuthentication`, `ExportProps`, `FunctionalUseCaseContext`, `Master`, `BackendAPIService`, `requests.ListIntegrationInstancesRequest`, `GetAllRequestBuilder`, `IContainerRuntime`, `TransferBuilder`, `LatestClusterConfigType`, `ConditionalBlock`, `requests.ListAnalyticsInstancesRequest`, `Selection`, `ESSearchSourceDescriptor`, `AsyncThunkAction`, `glm.mat4`, `TPluginsStart`, `TokenLevelState`, `ChainInfo`, `WebKitGestureEvent`, `InAppBrowser`, `TestEmbeddable`, `IPatient`, `AggregatePriceService`, `ImportStateMap`, `IMdcSliderElement`, `DateEnv`, `EncryptionProtectorName`, `NumericScaleLike`, `Filterer`, `Farmbot`, `DIDResolutionResult`, `ValuePredicate`, `SFPPackage`, `Spring`, `CompletionState`, `RetryStatus`, `DisplayInfo`, `PrimitiveAtom`, `ProjectStep`, `HTMLFieldSetElement`, `DocumentContents`, `Yendor.Context`, `Base64Message`, `EventModel`, `ViewportRuler`, `TransmartTableState`, `RepositoryCommonSettingEditWriteModel`, `SCTP`, `ConstantAndVariableQueryStringCommandInput`, `SelectColony`, `WatchEffectOptions`, `CandyDate`, `CoreSystem`, `CBCentralManager`, `FilterMap`, `RadioGroupProps`, `ResolvedValue`, `AwrDbWaitEventBucketSummary`, `FlowCall`, `SiteEntry`, `TreeDataSource`, `UniswapV1Client`, `TestExtension`, `GetAttributeValuesCommandInput`, `LevelDocument`, `CounterMetric`, `ODataRequest`, `OpenSearchRawResponse`, `ListTableColumnsCommandInput`, `protos.google.protobuf.IEmpty`, `IKeyQueryOptions`, `KeyExchange`, `LitecoinPaymentsUtilsConfig`, `am4maps.MapPolygon`, `HoverProviderItem`, `ISliderProps`, `WebdriverIO.Element`, `Kysely`, `AccessExpression`, `CallbackFn`, `AuthenticateModel`, `PromiseFunction`, `API.services.IChromeFileService`, `IScriptSnapshot`, `BitField`, `SerializeNodeToHtmlOptions`, `UpdateProfile`, `ExcludedRule`, `EAggregationState`, `ValVersion`, `InputText`, `IAddress`, `ShotRequestOptions`, `fromTimelineActions.GetTimeline`, `VolumeIndicatorCallback`, `WebStorage`, `Rx.Subscriber`, `Gallery`, `SatObject`, `CredentialProvider`, `IStage`, `ConditionResolution`, `VdmServiceMetadata`, `Point3D`, `CanvasGradient`, `EditSettingsCommand`, `T13`, `ControlContainer`, `DeleteDirectoryCommandInput`, `EuiComboBoxOptionOption`, `TreeContext`, `CreateBidDTO`, `Payload`, `EAVNField`, `EthAsset`, `StudentEntity`, `externref`, `AppViewRoute`, `JSONDocument`, `ResponseBody`, `TomcatServer`, `ArDB`, `DeveloperClient`, `DAL.DEVICE_ID_ACCELEROMETER`, `PanRecognizer`, `OverlaySizeConfig`, `Dimensionless`, `IProxy`, `Quantity`, `pd.FindSelector`, `StorageLocationModel`, `RequestTemplateDef`, `Int32`, `SchemaObject`, `SnapshotRestoreRequest`, `GetStateParams`, `GetDeclarationParameters`, `vscode.TestItem`, `GRUCell`, `ChannelSettings`, `AuthMetadata`, `MergedBuildFileTask`, `CreateParams`, `OnNumberCommitFunc`, `InterviewPrizePlaylist`, `ResolvedVersion`, `RgbaTuple`, `pointInfoType`, `PrunerT`, `Avatar`, `React.CSSProperties`, `V1`, `PDFRawStream`, `UserDomain`, `RegionFieldsItem`, `EventEnvelope`, `VfsStat`, `SNS`, `ServiceException`, `SubShader`, `GfxTopology`, `MqttMessage`, `UIAlert`, `ClassMemberLookupFlags`, `TypeConstraint`, `DisplayObjectWithCullingArray`, `HTMLBaseElement`, `InternalOpAsyncExecutor`, `ChartDataItem`, `ITextAreaProps`, `StoredAppChallenge`, `UpdateRuleGroupCommandInput`, `Req`, `IDateRangePickerState`, `UserOrganizationService`, `SourceASTBuilder`, `BaseConvLayerArgs`, `IPAddressEntry`, `UseBoundStore`, `Element_t`, `HostPort`, `StatusMessageService`, `TestModelVersion`, `EnumRow`, 
`EnhancedModuleGraph`, `PaddingMode`, `ITracerProfile`, `CoercibleProperty`, `IBindingWizardContext`, `RequestArguments`, `UpdateCampaignCommandInput`, `ProposalIdentifier`, `RequestQueryOptions`, `AveragePooling1D`, `DescribeContactCommandInput`, `ArrayMap`, `CreateBucketCommandInput`, `SignalingClientConnectionRequest`, `ImplicitImport`, `Measurements`, `NodeBuilderFlags`, `NodeLinks`, `TextDocumentEdit`, `PLIItem`, `OnboardingLightData`, `ISourceMapPathOverrides`, `ApexLibraryTestRunExecutor`, `ProcessedCDPMessage`, `TokenFields`, `SnippetOptions`, `VApiTy`, `FindTilesAdditionalParameters`, `TableListItem`, `ListViewEventData`, `DeployOptions`, `NgModuleData`, `IContainerRuntimeBase`, `TableOfContentsItem`, `EditorService`, `UserSettingsState`, `ListStorageObjectsRequest`, `ICollection`, `CGAffineTransform`, `GovernanceAccountType`, `EventParameter`, `SocketInfo`, `V1Certificate`, `IPost`, `OsdFieldType`, `SearchDevicesCommandInput`, `FunctionDefinition`, `AuthorizationData`, `TokenMarker`, `TunnelRequest`, `Skola24Child`, `MockAirtableInterface`, `CriteriaGroupType`, `SearchTimeoutError`, `StackMode`, `DemoItem`, `OverlayKeyboardDispatcher`, `AttributeWeights`, `Collator`, `QueryExpressionParensContext`, `TypedFragment`, `IValueConverter`, `RequestDetails`, `RecordSourceSelectorProxy`, `UpSetJSSkeletonProps`, `AggsAction`, `MpElement`, `VueAuthOptions`, `UpdateWorkspaceCommandInput`, `CompareType`, `TileKey`, `ResponseOptions`, `SelectionType`, `Level1`, `WindowLocation`, `SurveyElementEditorTabModel`, `DataModels.UserTasks.UserTaskResult`, `ValidationProblem`, `Bills`, `AbstractSqlConnection`, `IExtensionElement`, `IssuePriority`, `SliderProps`, `ColorKind`, `HdRipplePaymentsConfig`, `Clauses`, `CoronaData`, `SubmitTexture`, `RangeData`, `FieldMappingSpec`, `SecurityManager`, `SpacesPlugin`, `RelativeRect`, `TChunk`, `DecodedResult`, `SelectorsSource`, `IChatItemsState`, `AudioState`, `AuthTokenEntity`, `HttpResponseException`, `PluginDeployerResolverContext`, `MemberNames`, `Mob`, `PasswordHistoryResponse`, `SSOAdmin`, `d.ScreenshotDiff`, `Vc2cOptions`, `BundleDataService`, `requests.ListDedicatedVmHostInstanceShapesRequest`, `CustomHtmlDivFormatter`, `FolderId`, `ChartHighlightedElements`, `AzureClusterProvider`, `PopulateOptions`, `DevOpsAccount`, `SystemLayout`, `WordStyle`, `JSONSchema6`, `Icons`, `t_b1f05ae8`, `BITBOXCli`, `ChannelMessageList`, `Scheduler`, `ContentRecord`, `POIDisputeAttributes`, `NavigationIndicatorCriteria`, `PROVIDER`, `PluginContext`, `ReflectedValue`, `Shrewd.IDecoratorOptions`, `ModifierKeys`, `MyDefaultThunkResult`, `FetchAPI`, `IteratorOptions`, `QueryRenderData`, `EventInit`, `TSClientOptions`, `CallbackType`, `MerchantGoodsSkuEntity`, `CheckOriginConflictsParams`, `StoryListener`, `allContracts`, `AssembledReportGraphics`, `TESubscr`, `SurveyCreator`, `PgClient`, `SqlToolsServiceClient`, `EntityOperators`, `objType`, `GdalCommand`, `r`, `DataEvent`, `ShapeData`, `moment.Moment`, `ResultMeta`, `SampleExtractionResult`, `BluetoothRemoteGATTService`, `VideoQualitySettings`, `PNGWithMetadata`, `PrintableArea`, `ParameterMap`, `ExtendedKeyboardEvent`, `ExpressContext`, `ValidCredential`, `AppMetadata`, `TestFunctionImportSharedEntityReturnTypeCollectionParameters`, `DeprecatedHeaderThemes`, `Indices`, `ChainIdLike`, `JTDSchemaType`, `Rules`, `enet.IConnectOptions`, `HdLitecoinPayments`, `MessageSpecification`, `configuration.Data`, `TileDisplacementMap`, `ResilienceOptions`, `RemoteUpdateListener`, `RouteDryMatch`, `ScheduledDomain`, `ResourceSystem`, 
`SpectatorService`, `NodeDisplayData`, `IKeyIterator`, `PreferredContext`, `TypeGenerator`, `DataViewRow`, `AnyPersistedResource`, `BrickRenderOptionsResolved`, `WebSocketLike`, `DistinctValuesRequestOptions`, `StructProp`, `ActionGameState`, `PlaceholderProps`, `lgQuery`, `PrivateStyle`, `SoftwareModel`, `LegacyObjectToConfigAdapter`, `LinkOptions`, `SchemaContext`, `IncompleteTreeNode`, `GX.TevOp`, `DiffPanel`, `StringType`, `FailedJob`, `DescribeModelCommandInput`, `LookUpResult`, `Zerg`, `Secp256k1`, `ServerKeyExchange`, `AnyNode`, `PowerlevelCCSet`, `VisualizeAppState`, `UnitRecord`, `IConsul`, `TSupportedFaction`, `WebRequest`, `ByteMatrix`, `ConnectionBackend`, `Viewer.ViewerRenderInput`, `grpc.CallOptions`, `UnwrapRowsComputed`, `RecursiveAnnotation`, `d.PrerenderManager`, `THREE.Event`, `Instantiable`, `MessageHandler`, `ModuleImport`, `Rule.Node`, `Vendor`, `ComponentDescriptor`, `IERC20`, `MarkdownRenderer`, `TreeDataProvider`, `CodeRepository`, `DialogRef`, `IGherkinStreamOptions`, `EntityMetadataMap`, `NetlifyConfig`, `PostgrestResponse`, `InterfaceServerResponse`, `WechatMaterialEntity`, `ICharacterData`, `WebviewWidget`, `React.EffectCallback`, `fhir.Bundle`, `IChart`, `PanGesture`, `IReadOnlyFunctionCallArgumentCollection`, `PromiseCollection`, `TokenCredentialsBase`, `PluginState`, `OpenSearchDashboardsConfig`, `TestWalker`, `SvgProps`, `ThermocyclerModuleState`, `ODataBatchRequestBuilder`, `FullLocaleData`, `PipelinesGraphics`, `DangerDSLJSONType`, `ItemResponse`, `UpdateConfigurationDetails`, `IISystemProto`, `TileDataSourceOptions`, `PipelinesService`, `GetBalanceActivityOptions`, `GuidGenerator`, `PuzzleAction`, `RetryConfiguration`, `CalderaElement`, `Vulnerability`, `X12Transaction`, `FileSystemConfig`, `Ray3d`, `TextureDescriptor`, `WorkerRequestEntry`, `GetUserResponse`, `net.Endpoint`, `eris.Client`, `IZoweDatasetTreeNode`, `TextTransformType`, `ConditionExpression`, `ListChannelsRequest`, `TableQuery`, `IdentifierAttribute`, `StringDictionary`, `MusicbrainzArtist`, `CreateGroupCommandInput`, `WildcardIndex`, `WorkspaceFileWatcher`, `InvalidOperationException`, `CalendarApi`, `TaskInProgress`, `CreateRuleGroupCommandInput`, `ScanRunResultResponse`, `NoteData`, `InsertPosition`, `WrongDependencies`, `ConfigurationChangeEvent`, `PeerSet`, `GCPubSubServer`, `MissionElement`, `TypeParser`, `ISpan`, `LogSeriesFragmentPushRequest`, `PartyDataSend`, `ConnectionManagerState`, `MP4Box`, `ByteVector`, `TextStringLiteralContext`, `STFilterComponent`, `NextConnect`, `OrganizationUserBulkRequest`, `TiledObjectGroup`, `ThyTreeNodeCheckState`, `requests.ListAgreementsRequest`, `AreaGeometry`, `ITelemetryBaseLogger`, `IAuthStatus`, `GroupModel`, `NumberConfig`, `ObjectTypeComposer`, `Argon.SessionPort`, `ImageUpdate`, `TranslationPartialState`, `AnyIterableIterator`, `ng.IDirective`, `ICommandBus`, `DockerContainer`, `UnionTypeDefinitionNode`, `GestureState`, `CdkTree`, `DirtyDiff`, `PriceAxisViewRendererOptions`, `DiffState`, `rootState`, `ARAddModelOptions`, `ClientAndExploreCached`, `HasJSDoc`, `FileVersionSpec`, `AggTypeFilter`, `SourceGroup`, `LogoState`, `RestGitService`, `FieldSpec`, `Atom`, `AllowAction`, `HAP`, `DecoratorOption`, `Ch`, `OrderRepository`, `LastfmArtistShort`, `DeleteInputCommandInput`, `YAnnotation`, `ModifyReadResponseFnMap`, `PLSQLCursorInfos`, `FloatKeyframe`, `FieldDefinition`, `AppendBlobClient`, `CreateDatasetResponse`, `StoreService`, `EntityBuilderType`, `SplitInfo`, `VectorLayerDescriptor`, `DescribeServiceUpdatesCommandInput`, 
`IMiddlewareClass`, `XNumber`, `requests.ListVaultReplicasRequest`, `Chromosome`, `ListBuffer`, `EliminationBracket`, `ToolTipProps`, `StacksMainnet`, `SharedKey`, `FirebaseProject`, `ApplicationStub`, `ListExecutionsCommandInput`, `GovernElement`, `Http3FrameType`, `SubmissionSectionError`, `FontVariant`, `UserSessionService`, `MagitRepository`, `PreparationTool`, `DisplayValueMapService`, `DefaultVideoTransformDeviceObserver`, `EntityMetadata`, `Crdp.Runtime.ConsoleAPICalledEvent`, `SelEnv`, `NoteCacheItem`, `ISerializer`, `IAmazonServerGroupView`, `protocol.Request`, `PickleStep`, `SelectorArray`, `Containers`, `ITokenObject`, `ValueFormatter`, `AlertsByName`, `FilePickerProps`, `SpineBone`, `S3Configuration`, `ModuleRef`, `ReportingNotifierStreamHandler`, `OutlineSurveys`, `IAreaData`, `RawSavedDashboardPanel610`, `CreateCampaignCommandInput`, `AssetVersion`, `ArenaFormatings`, `WebMessageRawPayload`, `StateBase`, `BitExprContext`, `WorkerService`, `PerspectiveTransform`, `IntervalHistogram`, `SpaceBonus.TITANIUM`, `BindingPattern`, `native.Array`, `ListExportsCommandInput`, `IUtxo`, `IHashMapGeneric`, `WriteBufferToItemsOptions`, `MoveSeq`, `PostgreSQL`, `DaffPaypalTokenResponseFactory`, `ReduxActions.Action`, `ParseResult`, `VisType`, `IGBCoreService`, `DeleteStageCommandInput`, `TSError`, `TimeTrackerService`, `BlockchainCode`, `SVGTemplateResult`, `SDKVersion`, `IConsole`, `WorkflowMapper`, `UpdateRecorder`, `FormGroupDirective`, `NewPackagePolicyInputStream`, `DeviceTracker`, `FuseResult`, `ViewOptions`, `SkillGaussian`, `ClientBase`, `ILoaderIncludePipe`, `colorModule.Color`, `TestUser`, `EllipsoidPatch`, `CourseComponent`, `StoreApi`, `TileCacheId`, `LineString3d`, `TFileOrSketchPartChange`, `ResourcesModel`, `ProgramAccount`, `RenderCompleteListener`, `EditRepositoryCommand`, `DisassociateServiceRoleFromAccountCommandInput`, `THREE.SkinnedMesh`, `DailyRate`, `NAVObject`, `PackageManagers`, `Tx.Info`, `GraphReceipt`, `ContainerState`, `CoreIndexFile`, `ComponentResolverService`, `MemberDefinition`, `InstanceTarget`, `HitDatabaseMap`, `AudioService`, `PropertyChangedEvent`, `BitSource`, `HeapObject`, `ToolGroup`, `ConfigValue`, `Generation`, `TestBackendTimer`, `UniqueIdGenerator`, `CameraConfig`, `ThyNotifyService`, `InvalidParameterValueException`, `AllureRuntime`, `MapLayer`, `EventHandlers`, `Extended`, `VisTypeListEntry`, `TInjectableOptions`, `OpenDialogOptions`, `MDCFloatingLabelAdapter`, `ImageBitmap`, `LocationCalculatorForHtml`, `FrontCardAppearanceShort`, `IntersectionState`, `FileSystemHost`, `PragmaDirectiveContext`, `requests.GetDomainRecordsRequest`, `IHistoryItem`, `OnRefreshProps`, `ConversationService`, `VersionMismatchFinder`, `StyleSheet`, `Path5`, `CanvasPattern`, `EncodeApiReturnOutput`, `GPGPUProgram`, `StatisticsSetType`, `IFluidDependencySynthesizer`, `TalkSession`, `PutAccountSendingAttributesCommandInput`, `GfxPlatformWebGL2Config`, `V1Deployment`, `IBreakpoint`, `LimitedTypedData`, `ReactPortal`, `ListState`, `PackageFailures`, `ITagMatchInfo`, `JsonaAnnotation`, `BlobClient`, `AdapterConstructor`, `NVMDescriptor`, `FrescoDrawee`, `GraphQLNamedType`, `PlayerInfo`, `NodeKeyType`, `ComponentPath`, `SchemaService`, `DateBodyRow`, `GetBucketTaggingCommandInput`, `RivenProperty`, `ParsedLineType`, `AuthResponse`, `FirebaseObjectObservable`, `AnimationNodeContent`, `RobotCard`, `NVM3Objects`, `... 
23 more ...`, `ServiceBuilder`, `RuleAction`, `ReaderOptions`, `IColumn`, `MockedFunction`, `StampinoRender`, `QueryOption`, `NodeSourceOption`, `Swap`, `AuthenticateResultModel`, `IProjectConf`, `PolygonGeometry`, `PickItem`, `HostElement`, `ModuleDeclaration`, `SeriesDomainsAndData`, `DispatcherPayloadMetaImpl`, `UsedHashOnion`, `CoinbasePro`, `HookOptions`, `TimePickerModel`, `TheDestinedLambdaStack`, `IReactionPublic`, `CompileOptions`, `SimpleAllocationOutcome`, `IConcatFile`, `SearchIndex`, `TypeConverter`, `SymbolScope`, `SmsProvider`, `PoolTaskDataService`, `FieldHook`, `DataSource`, `FetchedIndexPattern`, `IntPretty`, `IntegratedSpiral3d`, `AsyncFnReturn`, `ValidatorModel`, `Molecule`, `BundleOptions`, `AlyvixApiService`, `AutocompleteFieldState`, `StackOperationStep`, `StellarBalanceMonitor`, `bool`, `dom5.Node`, `FilterLabelProps`, `CameraCullInfo`, `requests.ListVirtualCircuitBandwidthShapesRequest`, `SqrlParseErrorOptions`, `matter.GrayMatterFile`, `StyleExpression`, `IFileUnit`, `HeroById`, `core.JSCodeshift`, `BrewView`, `TType`, `MessageArg`, `Masking`, `UpdateProjectResponse`, `SuperTest`, `NonFungibleAssetProvider`, `IReactionDisposer`, `IMinemeldPrototypeService`, `ExecutionContainer`, `WorldComponent`, `RenderTreeFrameReader`, `Spacing`, `CFMLEngine`, `TranspileOutput`, `LayerConfigJson`, `CreateAndTransferTransition`, `ResourceNode`, `PvsVersionDescriptor`, `HubServer`, `RootContainer`, `Structures`, `FilterService`, `TagSpecification`, `ForkOptions`, `MsgToWorker`, `NodeSorter`, `TestRunnerAdapter`, `MinifyOutput`, `CharacteristicGetCallback`, `CustomPropertyGetUsage`, `ValueStream`, `ReadAndParseBlob`, `FunctionConfiguration`, `requests.ListTransferJobsRequest`, `IAllAppDefinitions`, `ResolverMap`, `TensorTracker`, `FirebaseUserModel`, `AccountSettings`, `FetchPolicy`, `InvalidStateException`, `GroupingService`, `DebugProtocol.StepInResponse`, `PoolFactory`, `Ctor`, `Nibbles`, `ExportContext`, `SystemIconStyles`, `FirestoreError`, `AppDeepLink`, `ContractNode`, `LedgerRequest`, `SubqueryRepo`, `BasisCompressionTypeEnum`, `URLBuilder`, `MdcRadio`, `CommsRecord`, `IPCMessages.TMessage`, `ListCardConfig`, `React.ComponentClass`, `ColumnObjectType`, `SerializedSavedQueryAttributes`, `IUserNote`, `BinaryBuffer`, `TemplateDocument`, `DatasetTree`, `IDocumentContext`, `MVideoFile`, `Magic`, `StencilOp`, `StatsModule`, `NbDialogService`, `Notations`, `DataView`, `GQtyClient`, `ElementState`, `LayoutMaterial`, `NzI18nInterface`, `CreateArg`, `FacetsGroup`, `ErrorCodes`, `SearchSourceDependencies`, `SetupDependencies`, `AccountingTemplate`, `Counter`, `TypeIR`, `LoginUri`, `ReadModelInterface`, `ItemStorageType`, `TEChild`, `TableBatchOperation`, `CompiledRuleDefinition`, `BanesAndBoonsInfo`, `SpyInstance`, `GitJSONDSL`, `QueryTimestampsCommandInput`, `API.storage.api.ChangeDict`, `FileAccessor`, `FolderComponent`, `DependencyMapEntry`, `RunResult`, `ApiResourceReference`, `SupportCodeExecutor`, `JsonRpcClient`, `IFormSectionGroup`, `DeviceLog`, `EzModel`, `SimulatorDevice`, `GoEngineConfig`, `ManagementSection`, `Hello`, `ISPHttpClientOptions`, `http.IncomingMessage`, `IXYOperator`, `DataPublicPluginStart`, `AuthHandler`, `InteractionProps`, `AssertNode`, `MessageResponse`, `LicenseInfo`, `RX.Types.ReactNode`, `LogDomain`, `OwnerService`, `IEntityKeyType`, `GlobalAveragePooling2D`, `QueryShortChannelIdsMessage`, `SampleProduct`, `Ripemd160PolyfillDigest`, `ISqlEditorTabState`, `PreloadedQuery`, `SummaryCollection`, `ApiValue`, `DescribeScalingActivitiesCommandInput`, 
`ReadonlyObjectDeep`, `UserProvided`, `TestSuiteNode`, `ButtonData`, `CompareFn`, `TypistOptions`, `DiscordBridgeConfigAuth`, `ModalInitialState`, `TSAssign`, `DynamodbMetricChange`, `Mask`, `IConnectionOptions`, `CreateClusterCommandInput`, `SearchBarProps`, `express.Request`, `IInstanceDefinition`, `TableHeader`, `RenderHookResult`, `TensorList`, `AppThunk`, `CreateApplicationCommandOutput`, `Cls`, `CallClientState`, `MenuModelRegistry`, `EventArguments`, `MutableQuaternion`, `DebugProtocol.ScopesResponse`, `PeerId`, `GeneratedFont`, `Download`, `AddressVersion`, `JSMs.XPCOMUtils`, `PropertyInfo`, `VNodeQueue`, `Survey.SurveyModel`, `IterableDiffers`, `LernaPackage`, `BumpType`, `InjectFlags`, `StaticConfigParsed`, `ListActionsCommandInput`, `ICompilerResult`, `RouteType`, `MiTextConfig`, `OutputLink`, `ShareParams`, `BuilderReturnValue`, `ExtensionReference`, `IIterationSummary`, `DecoratorOptions`, `ValueSuggestionsGetFn`, `ValidationMessage`, `RulesClient`, `NestedValueArray`, `Immutable`, `TapeNode`, `TestState`, `LegacySprite`, `PersistenceHelpers`, `GPUBindGroup`, `TextureLoader`, `FilterOption`, `ValueXY`, `DatabaseFactory`, `CrudService`, `EntityComparisonField`, `IngredientReducerState`, `ContextOptions`, `SlideData`, `CheckboxGroupState`, `FetchGroup`, `SortedMapStructure`, `DyfiService`, `StatePropsOfControl`, `ImmutableSet`, `BinaryOperationNode`, `InsightOptions`, `CursorState`, `HashCode`, `CalendarOptions`, `EightChar`, `QueryProviderRequest`, `BuildFile`, `ErrorController`, `FaunaData`, `Pier`, `CloudFrontRequest`, `sinon.SinonFakeTimers`, `EffectReference`, `PatchListener`, `SettingsStore`, `LoginUriData`, `TokenRequest`, `GetTemplateCommandInput`, `SRoutableElement`, `GlobalVariables`, `PlayerModel`, `ICreateOrgNotificationOptions`, `DeleteDataSetCommandInput`, `Supplier`, `RecordRawData`, `GameEngine`, `Filesystem.FileExistsSync`, `ActJestMoveTimeTo`, `CustomReporterResult`, `ISubscribe`, `LineModel`, `ActionheroLogLevel`, `NoneAction`, `BoxFunction`, `StateChangeListener`, `OclExecutionContext`, `TemplatePortal`, `SizeType`, `ThyUploadFile`, `TParam`, `FunctionN`, `AnalysisMode`, `CouncilProposal`, `SendRequest`, `PostSummary`, `AnimationKeyframeLinear`, `RedisOptions`, `SolanaKeys`, `PathToRegExpOptions`, `EndpointDetails`, `sentry.SentryEvent`, `SliceRequest`, `requests.GetConnectionRequest`, `QueryResult`, `TemplateAst`, `DynamicFormArrayModel`, `RobotApiRequestMeta`, `TransactionStatus`, `TypePredicateNode`, `AndOptions`, `T16`, `NavigationParams`, `PersistStorage`, `IssueProps`, `ResponsiveSpace`, `IChild`, `AStore`, `Reconciler`, `OpticType`, `Intl.NumberFormatOptions`, `Prefab`, `NodeCheckFunc`, `ParamDefinition`, `GeoBounds`, `CosmosdbSqlDatabase`, `Fig.Arg`, `MetadataType`, `SidebarService`, `IAdvancedBoxPlotData`, `Jb2Adapter`, `WebGLMemoryInfo`, `TocLink`, `PSIBoolean`, `IRecord`, `EmitHost`, `BackendErrorLabel`, `IMovie`, `InterfaceEvent`, `UnitAnalyser`, `ChannelId`, `PacketMember`, `NormalizedConfig`, `Directions`, `MnemonicSecret`, `AudioVideoObserver`, `requests.ListAutonomousExadataInfrastructureShapesRequest`, `ContentTypeSchema`, `ScopeOptions`, `AppRouteRecordRaw`, `EChartGraphNode`, `Fixture`, `ILaunchOptions`, `Window.ShellWindow`, `ForStatement`, `CmafEncryption`, `httpProxy`, `ConvaiCheckerComponent`, `IStaticFile`, `ReplicaSet`, `BSplineWrapMode`, `CreateBotCommandInput`, `GetEnvironmentTemplateVersionCommandInput`, `ChatUser`, `PublicParams.Swap`, `ServerConfigResource`, `OperationModel`, `TransformedStringTypeTargets`, `Phaser.Game`, 
`TinyHooks`, `LogStream`, `NormalizedParams`, `UseTransactionQueryState`, `TokenScanner`, `UniqueKey`, `LanguageIdentifier`, `CategoryLookupTables`, `QueryOrderRequest`, `TextMetrics`, `AESJsonWebKey`, `GfxBlendFactor`, `TestFabricRegistryEntry`, `PresenceHandler`, `BodyElement`, `InvokeMethod`, `SteamTree`, `... 7 more ...`, `Attitude`, `TextMatchPattern`, `CombatAttack`, `SectionList`, `IRootElasticQuery`, `IExplorer`, `LinuxJavaContainerSettings`, `PQLS.Library.TLibraryDefinition`, `WorkspaceOptions`, `RoosterCommandBar`, `UpdateDataSetCommandInput`, `TypeNameContext`, `DescribeExportCommandInput`, `ParsedQueryWithVariables`, `MongoClient`, `NgrxJsonApiStoreResources`, `ContextProps`, `UrlFilter`, `TypeArgumentResult`, `ShapeTreeNode`, `MessengerTypes.BatchItem`, `SpansRepository`, `CheckMode`, `GetRecommendationsCommandInput`, `FetchInit`, `DeployStatus`, `CommandEntry`, `ReadableBYOBStreamOptions`, `ts.TextRange`, `TutorialRuleStatus`, `ProviderStore`, `ExplicitFoldingConfig`, `RequireId`, `Average`, `IDescribeRunner`, `Pt2`, `SymBool`, `UniformState`, `ScriptProcessorNode`, `RgbaColor`, `SlpTokenGraph`, `FluentLogger`, `Providers`, `ValidationFunc`, `ExtrinsicDetails`, `WalletState`, `EclipticCoordinates`, `Emotion`, `BuildArtifact`, `IShareButton`, `UserDataCombined`, `MinimalTransaction`, `BaseLanguageClient`, `DayElement`, `SubscribeMessage`, `MyCombatLogParser`, `PropertyCollection`, `ClassConstructor`, `UpdateFindingsCommandInput`, `DimensionMapping`, `DaffAuthLoginReducerState`, `TextInputType`, `ChromeHelpExtension`, `ChartTooltipItem`, `LaunchTemplate`, `ExternalMaster`, `MutableRefObject`, `ShapePair`, `Row`, `LoginParams`, `IAggregationStrategy`, `ITableDefine`, `CertificateAuthorityTreeItem`, `DomExplorerNode`, `NativeInsertUpdateOptions`, `ObservableLightBox`, `EncryptionConfig`, `v2.WebDAVServer`, `PosAndDir`, `MeasurementKind`, `Autopanner`, `PositionAnimation`, `MnemonicX86`, `StateObject`, `Datasource`, `ListPortfoliosForProductCommandInput`, `THREE.Box3`, `IQueryBuilder`, `QualifiedIdentifierContext`, `CloudWatchMetricChange`, `LabelType`, `DescribeOrganizationCommandInput`, `UserPoolService`, `TravisCi`, `WesterosGameState`, `SubjectDetails`, `IntelRealtimeResponse`, `SidebarButtonProps`, `PropertyDecorator`, `IWebAppWizardContext`, `AST`, `IndyPool`, `CreateRouteCommandInput`, `OperatorFinishEncodeInfo`, `S3Destination`, `LayoutActor`, `restm.RestClient`, `DirectiveProfile`, `OidcState`, `CheckResultBuilder`, `NamedTypeNode`, `SQLite3Types`, `TestThrottler`, `AirSchema`, `DataType`, `ApiErrorParams`, `PDFImage`, `BsModalService`, `FormikProps`, `FnParam`, `NonEmptyList`, `NotifyFunc`, `IDeploymentCenterContext`, `InputBox`, `CB`, `mm.IFormat`, `ISearchOptions`, `ExchangeInstance`, `IGridAddress`, `AstroConfig`, `FlowsenseUpdateTracker`, `ComponentCompilerProperty`, `IProps`, `ModelStore`, `WebGL2DisjointQueryTimerExtension`, `BlobGetPropertiesResponse`, `ElementStyles`, `BreadcrumbOptions`, `VocabularyModel`, `PPTDataType`, `CampaignTimelineBoardViewerChanelsModel`, `RectilinearEdgeRouter`, `RxnPlus`, `Bag`, `Topics`, `Slides`, `AssetsOptions`, `AwsCredentials`, `ISmsProvider`, `Thing`, `Characteristic`, `cormo.Connection`, `DeployBuilderOptions`, `PointsGeometry`, `BLOCK`, `HTMLInputElement`, `ThemeVersion`, `InterpolationCurve3dOptions`, `ChatTab`, `dagre.graphlib.Graph`, `IPreset`, `TAtrulePrelude`, `Highcharts.VennRelationObject`, `SandboxContext`, `ParticipantListItemStyles`, `ICols`, `TypeDescription`, `ResponseErrorAttributes`, `PlaylistEntry`, 
`DeleteSubnetGroupCommandInput`, `requests.ListAutonomousDbPreviewVersionsRequest`, `XChaCha20Poly1305`, `TodoService`, `mongoose.FilterQuery`, `Events.visible`, `EthereumListener`, `ScopedClusterClient`, `CacheManagerGetOptions`, `DeleteApp`, `FoodModel`, `SingleLayerStringMap`, `FlagInfo`, `DaffSeoNameMetaDefinition`, `DaffCartAddressFactory`, `Elem`, `IModelHostConfiguration`, `SymFloat`, `CommandInteraction`, `IFormikStageConfigInjectedProps`, `IExtendedCommit`, `Normalized`, `NgEnvironment`, `StorageKeys`, `GLsizei`, `ThumbnailProps`, `RootNode`, `Console`, `EqualityFunc`, `Ratio`, `NineZoneStagePanelPaneManagerProps`, `UrlMatchResult`, `ElasticsearchClientConfig`, `EVENT_TYPE`, `MessagingServiceInterface`, `FilterConfig`, `Enzyme.ReactWrapper`, `ToastService`, `IAstItem`, `TypeNode`, `Etcd3`, `SavedObjectsTypeMappingDefinitions`, `RType`, `VisualizeAppStateContainer`, `FactorGradient`, `CoreStart`, `requests.ListMultipartUploadPartsRequest`, `ILinkedNodeWithValue`, `SemanticMeaning`, `EventSubscriptionQuotaExceededFault`, `PreviousSpeakersState`, `TableRowData`, `ChangeStateMap`, `BrandState`, `KDF`, `PerspectiveDetails`, `Http3HeaderFrame`, `IDiscordMessageParserResult`, `HasTagName`, `RequestSpec`, `OrganizationProjectsService`, `AccountEmail`, `ATNConfig`, `Pubnub`, `ILocalConfig`, `TimeOffPolicy`, `IRouterliciousDriverPolicies`, `IExecSyncResult`, `ApplyPath`, `PeerInfo`, `JSDocTypeTag`, `events.Handler`, `FieldFormatsStart`, `ConnectionArgs`, `ChunkIndex`, `LogAnalyticsParameter`, `TableDimension`, `ResolvedEphemeralListType`, `ResolvedCSSBlocksEmberOptions`, `Players`, `NumericValuesResult`, `SlotDoc`, `EventStatus`, `ATTRIBUTE`, `TestFormat`, `ApiRun`, `StackingState`, `DrivelistDrive`, `PinMap`, `NzMessageDataOptions`, `Runnable`, `ListUsersRequest`, `SettingsConfig`, `PreprocIncInfo`, `ViewResources`, `InstanceRejectOnNotFound`, `ILanguageObject`, `React.RefObject`, `SingleYAMLDocument`, `SynthIdentifier`, `LanguageClientOptions`, `AssetType`, `RuleConfigTuple`, `ServiceClient`, `ReactEventHandlers`, `IterableChangeRecord_`, `RTCRtpSender`, `Datatypes`, `SiteConfig`, `TimeWidget`, `IProductOptionGroupTranslatable`, `MockConnection`, `HeadersFunction`, `RegisterConfig`, `ExpectedCompletionEntryObject`, `DerivedGauge`, `CommandClasses`, `IWatchOptions`, `PoiBatch`, `NodeProperties`, `SettingOptions`, `ImGui.DrawVert`, `AbstractProvider`, `Balances`, `DetailsState`, `Door`, `SurveyLogicAction`, `SystemStyleObject`, `InterpolationCurve3d`, `PageButtonProps`, `MessageId`, `ArrayEntry`, `IMagickImage`, `JsonStringifierTransformerContext`, `MessageEnvelope`, `TaskFile`, `ActionsObservable`, `LayoutConfig`, `TypeExpression`, `HMACParams`, `RetryKeycloakAdmin`, `AddApplicationInputCommandInput`, `QueryBidRequest`, `AddressHashMode`, `Mentor`, `ListRunsRequest`, `ReadFileResult`, `ChangePasswordRequest`, `CloudFront`, `i8`, `ProjectionResult`, `ClientModel`, `CreateResult`, `SettingLanguage`, `SortedMap`, `ISelectionData`, `BspSet`, `MetricsConfiguration`, `WebGLRenderTarget`, `StageStore`, `X12QueryResult`, `DijkstraNode`, `TaskData`, `IRealtimeSelect`, `MaterialConfig`, `RebaseEditorContext`, `Pool2DProgram`, `PickFunction`, `JsonDocsPart`, `BaseEdge`, `AllStyleOption`, `ApplicationParameter`, `CfnCondition`, `CaretCoordinates`, `RecycleAccumulator`, `APIVersion`, `GX.IndTexAlphaSel`, `Combine`, `WorkNode`, `Howl`, `FileAnalysisResult`, `HttpManagementPayload`, `IRef`, `IPluginTimes`, `INohmPrefixes`, `PointerInfoPre`, `UserOperation`, `WebAudioInstance`, `MetaDefinition`, 
`LmdbEnv`, `Weather`, `OperationInfo`, `PlayingCard`, `Group1524199022084`, `TimeGridWrapper`, `CombinedJob`, `MemoryManager`, `SessionEntity`, `NetworkSourcesVirtualSourceList`, `LinkInfo`, `ReadonlySymbolSet`, `TimerEvent`, `BRepGeometryInfo`, `ICommonTagsResult`, `SfdxFalconResultType`, `InterfaceWithExtends`, `TodoFilter`, `esbuild.OnResolveArgs`, `PropertiesService`, `StoreMetaInfo`, `TransformStreamDefaultController`, `CompilerOutput`, `ReadableStreamDefaultReader`, `DidDocumentService`, `UnhashedOrder`, `TransactionVersion`, `Tarefa`, `PadplusRoomInvitationPayload`, `React.SetStateAction`, `Flow`, `IBook`, `TaskDefinitionRegistry`, `IModelReference`, `SemanticNode`, `IResourceRow`, `SpacerProps`, `WorkRoot`, `QueryClient`, `PropsHandler`, `PreProcessor`, `vscode.Disposable`, `SideBarTabModel`, `IncomingHttpHeaders`, `HierarchyCompareInfoJSON`, `StoredEventBatchPointer`, `LoadedVertexDraw`, `messages.DataTable`, `LinariaClassName`, `AnimationController`, `GeoJsonObject`, `unreal.Message`, `ElasticsearchResponse`, `GLTFLoaderExtension`, `Scale`, `Rx.TestScheduler`, `RetryDataReplicationCommandInput`, `MockMessageRequester`, `MaterialAnimationTrack`, `requests.ListAppCatalogListingResourceVersionsRequest`, `Datum`, `RxFormArray`, `ITemplatedBundle`, `FragmentManager`, `tf.NamedTensorMap`, `INativeMetadataCollector`, `ActionTypeBase`, `DAL.DEVICE_HEAP_ERROR`, `Datafeed`, `SearchService`, `StepResult`, `AccountJSON`, `AttrAst`, `CallClientProviderProps`, `TKey2`, `FormattingRequestKind`, `FutureNumber`, `MappedNameValue`, `SearchView`, `KeyFrameLink`, `Transporter`, `TooltipInfo`, `Buffer`, `CreeperPoint`, `EventFnSuccess`, `Split`, `FieldFilterState`, `StunProtocol`, `BrowserTranslateLoader`, `DebugId`, `Objective`, `SyncResultModel`, `RollupOutput`, `OutputData`, `UpSampling2D`, `ThermostatFanModeCCSet`, `IdentifierNode`, `Assign`, `DataMap`, `tflite.TFLiteModel`, `ListBundlesCommandInput`, `MDCFloatingLabelFoundation`, `BuildLog`, `INumbersColumn`, `YAMLWorker`, `requests.ListMultipartUploadsRequest`, `SiemResponseFactory`, `APIGatewayProxyEvent`, `Village`, `IIArticlesState`, `DateHistogramBucketAggDependencies`, `ChannelConstants`, `DialogActions`, `PolyfaceBuilder`, `MinAttrs`, `SimEnt`, `Ord`, `RuleModule`, `ISymbol`, `PlotBandOptions`, `Closure`, `KeywordToken`, `requests.ListServiceGatewaysRequest`, `SiblingGroup`, `SubscriptionNotFoundFault`, `JsonAstKeyValue`, `messages.PickleDocString`, `AppSettings`, `DeleteSchemaCommandInput`, `FakeExecution`, `ImageryCommunicatorService`, `CollapseProps`, `CleanupCallback`, `ArgParser`, `Toppy`, `XAndY`, `CustomBond`, `DiagnosticAndArguments`, `IIntegerRange`, `ImportKind`, `HlsPackage`, `NumberFilterFunction`, `DialogContext`, `BSPSphereActor`, `argsT`, `ElasticsearchConfig`, `CommitTransactionCommandInput`, `CompoundPath`, `HookNextFunction`, `IAzureMapFeature`, `VFile`, `EventAttendance`, `types.MouseData`, `HapiRequest`, `IEmailTemplateSaveInput`, `MetaheroToken`, `EditFn`, `SingleBar`, `CipherObject`, `Joiner`, `MigrateEngineOptions`, `CategoryResult`, `ReaderIO`, `EventUI`, `ZoneDef`, `AgencyApiRequest`, `RectInfo`, `XRPose`, `ComplexPluginOutput`, `MatrixReader`, `FlightDataModel`, `FileTypeResult`, `CKBConfig`, `BTCMarkets.instruments`, `JessParser`, `EmailPayload`, `DeleteInvitationsCommandInput`, `OasVersion`, `serialization.Serializable`, `AdministrationScreenService`, `BillCurrencyUnit`, `CreateRegistryCommandInput`, `OptionsObject`, `EditRepositoryPayload`, `SimpleBinaryKernelImpl`, `AutofillMonitor`, `RxJsonSchema`, 
`Equality`, `CallIdChangedListener`, `StyleDoc`, `RecurringActivity`, `AsyncTestBedConfig`, `PolicyFromES`, `Fig.ExecuteShellCommandFunction`, `IsInstanceProps`, `OSD_FIELD_TYPES`, `PlaceIndex`, `UpSetJSSkeletonPropsImpl`, `CheckType`, `PuzzleID`, `RegionConfig`, `CompressOptions`, `ClientSideSocket`, `FloatAnimationKeyframeHermite`, `KxxRecord`, `PrimitiveProps`, `FloatBuffer`, `ApolloRequest`, `SignatureEntry`, `GetDeviceRequest`, `Listing`, `Tensor`, `StyleFunction`, `DeleteBuilder`, `JsonDocsDependencyGraph`, `WorkflowContext`, `Filterable`, `PropertyInjectInfoType`, `IJsonResourceInfo`, `BinData`, `VideoChatSession`, `SeriesData`, `AckRange`, `ApolloReactHooks.QueryHookOptions`, `RestApplication`, `RestRequest`, `NumberOptions`, `SGSymbolItem`, `AccountGameCenter`, `OAuth`, `NohmClass`, `DistinctQuestion`, `ExposedThing`, `TooltipItem`, `MemoryEngine`, `f32`, `ResolvedTypeReferenceDirectiveWithFailedLookupLocations`, `SassNumber`, `DataResult`, `HyperScriptHelperFn`, `InputSize`, `List`, `SharePluginSetup`, `ApiRevisionContract`, `Legend`, `m.Recipe`, `DataResolverInputHook`, `api.State`, `VueRouter`, `Memento`, `AnimationNode`, `GuildEmoji`, `THREE.TextureDataType`, `KeySequence`, `DragBehavior`, `SQLParserListener`, `FunctionDataStub`, `AlainConfigService`, `ConsoleFake`, `ProofTest`, `ITuple2`, `After`, `Algebra.GroupNode`, `Apply`, `LoggerTransport`, `ICredentialsState`, `GetByIdOptions`, `DaffOrderItem`, `DonwloadSuccessData`, `AnnotationDimensions`, `IActorDef`, `NexusEnumTypeDef`, `B5`, `Decorators`, `MdcTextField`, `IPropertyPaneField`, `TestInputHandler`, `TestDialogConfig`, `AtomArrowBlockElement`, `IProperties`, `RLANAnimationTrackType`, `EditorProps`, `CryptoKeyPair`, `MCU`, `DescribeEventCategoriesCommandInput`, `IHelper`, `RoundingFn`, `AgChartOptions`, `SCNSceneRenderer`, `PiProperty`, `Protocol.Runtime.RemoteObject`, `DataCardsI18nType`, `MoveEvent`, `MatchList`, `IAssetItem`, `FeedbackDelay`, `CPUTensor`, `ProductV2`, `QueryResolvers`, `UIProposal`, `IntegrationMapService`, `ChainableConnectors`, `BotAnchorPoint`, `L2Data`, `TOutput`, `PackageLock`, `CustomScript`, `GridsterComponentInterface`, `ChangeBuffer`, `ThemeContextValue`, `XjointInfo`, `HashAlgorithm`, `IRegionConfig`, `IProductTypeTranslatable`, `MainSettings`, `ChainGetter`, `WizardComponent`, `ModelJsonSchema`, `PipeFlags`, `UICarouselItemComponent`, `ReflectType`, `RepoState`, `EngineDetails`, `StatementAst`, `IOffset`, `WebSocket`, `EntityCollectionService`, `RawTree`, `ClientAuthCode`, `MortalityService`, `StatementedNode`, `RematchRootState`, `IExpressionLoader`, `RenameFn`, `MaybeAsyncHelpers`, `PasswordPolicy`, `MailboxEvent`, `PostgresTestEntity`, `TagConfiguration`, `IParseProps`, `IndexThresholdAlertParams`, `EIP712Types`, `FilterItem`, `CeramicConfig`, `UpdateManyResponse`, `lsp.Range`, `TableValidator`, `PluginOptionsSchemaArgs`, `CallEndedListener`, `SecureClientQuery`, `DescribeReplicationTaskAssessmentRunsCommandInput`, `DateTimeFormatPart`, `Button`, `ModuleInfo`, `FilterManager`, `NetworkSet`, `CloudFormationResource`, `LoadmoreNode`, `VtxLoader`, `CodeProps`, `FilterCreator`, `IErrorObject`, `ts.IScriptSnapshot`, `btSoftBodyWorldInfo`, `JsxOpeningLikeElement`, `InternalBema`, `float32`, `ProgramArgs`, `AddressBookInstance`, `Point4d`, `PostRepository`, `t.TypeOf`, `CreateTagDto`, `BLE`, `SelectOption`, `RootReducerState`, `QueryInfo`, `WhereExpression`, `Bounds`, `EmailDoc`, `ModalConfig`, `ClassPrototype`, `FunctionBody`, `CashScriptListener`, `ChordNode`, `AzureAccessOpts`, `BleService`, 
`AttachmentView`, `OrderByClauseArgument`, `StringPart`, `HSL`, `PublicationView`, `Keyring`, `QCProject`, `DescribeChannelBanCommandInput`, `SidebarMenu`, `Scraper`, `PolicyBuilderElement`, `FlexConfigurationPlugin`, `GetProductSearchParams`, `IDocEntryWeight`, `XI18nService`, `SVGTitleElement`, `LocalFraudProof`, `CandyDateType`, `StackSeriesData`, `EventHandlerInfo`, `DataViewMetadataColumn`, `CompType`, `LRU`, `StackNavigationOptions`, `apid.EditManualReserveOption`, `PlaybackStatus`, `TransformFlags`, `ENSService`, `WordCloudDataPoint`, `ECPairInterface`, `StepModel`, `SimpleOrder`, `ScriptAst`, `StableSwap`, `GeometryQuery`, `ChartsPlugin`, `TimerProps`, `DistanceM`, `IconifyAPIQueryParams`, `CLM.UserInput`, `DynamicCstr`, `IAuthZConfig`, `requests.ListCompartmentsRequest`, `ProviderProxy`, `Action`, `InputValidationService`, `FlipperServerImpl`, `CommentService`, `SimpleFrameStatistics`, `ImageOptions`, `MagentoCartFactory`, `PropertiesSource`, `EnvTestContext`, `FolderWithId`, `ModelStoreManagerRegistry`, `AppInstanceProposal`, `SequenceContract`, `DeepPath`, `Todos`, `Accessory`, `d.OutputTargetWww`, `PlayCase`, `BillPayer`, `IAmazonImage`, `Project`, `loaderCallback`, `FadingFeature`, `SFCDescriptor`, `Docker.Container`, `ListOfRanges`, `S2GeometryProvider`, `match`, `GraphAnimateConfig`, `VimState`, `sdk.SpeechTranslationConfig`, `SerializerTypes`, `ServerRequestHandler`, `RenderAtomic`, `ScreenProps`, `NavigationBindings`, `AuthorizationDataService`, `K2`, `PlantMember`, `MyUser`, `OtherArticulation`, `WeaponObj`, `iconType`, `DokiTheme`, `NzTabNavItemDirective`, `WaiterConfiguration`, `CounterFacade`, `TableInsertEntityHeaders`, `objPool.IPool`, `TTransport`, `JsonaObject`, `ForceSourceDeployErrorResponse`, `ClientTools`, `ForOfStatement`, `types.DocumentedType`, `SWRConfigInterface`, `UserIDStatus`, `AlertProps`, `LoadMany`, `AudioWorkletNode`, `RpcKernelBaseConnection`, `IFullProps`, `RequestChunk`, `IFileStat`, `CreditedImage`, `UnidirectionalLinkedTransferAppState`, `CueAndLoop`, `ScenarioEvent`, `SpreadElement`, `DaffCompositeProduct`, `TweetMediaState`, `InternalNode`, `CreateDashboardCommandInput`, `FileTypeEnum`, `HostInstructionsQueue`, `IntegerType`, `OAuthEvent`, `OptionsStackingValue`, `ValidationType`, `RepoSyncState`, `StateInterface`, `Ctx`, `AcrylicConfig`, `PaginationService`, `PreparsedSeq`, `JsonDocsEvent`, `$p_Predicate`, `DynamicStyleSheet`, `Saga`, `GitExtension`, `EncodeInfoDisplayItem`, `RangesCache`, `ClientScopeRepresentation`, `ClusterInfo`, `BaseUI5Node`, `EnvironmentType`, `IAmExportedWithEqual`, `PWAContext`, `VirtualHub`, `IWorkflowData`, `WorkNodePath`, `knex.Transaction`, `SecureChannel`, `oai3.Schema`, `WebSocketChannel`, `turfHelpers.FeatureCollection`, `AccordionStore`, `RequestWithSession`, `SurveyQuestionEditorTabDefinition`, `TextShadowItem`, `EnumProperty`, `ROM`, `HashMapState`, `HTMLAnchorElement`, `IHandler`, `JJunction`, `RnM2TextureInfo`, `BrowserDriver`, `ResolvedInfo`, `HemisphereLight`, `IntegrationInfo`, `GridColumnExtension`, `vscode.WebviewPanel`, `SelfDescribing`, `fhir.Composition`, `DebtTokenContract`, `GuildResolvable`, `IntegrationSettingService`, `JPABaseShapeBlock`, `InjectedProps`, `ImportReplacements`, `FetchResponse`, `RaiseNode`, `TransactionSignature`, `TinyDate`, `MDCCornerTreatment`, `SmartPlayer`, `N4`, `AssociationCCRemove`, `Pool3DProgram`, `SQLRow`, `ImageRequestInfo`, `AstBlock`, `CatExpr`, `LocalEnv`, `ResourcePrincipalAuthenticationDetailsProvider`, `HTMLCanvasElement`, `RangeBasedDocumentSymbol`, 
`OnPushList`, `BriefcaseDb`, `IRolesMap`, `TraverseContext`, `ChannelResolvable`, `code.Position`, `MultiLanguageBatchInput`, `IQueryOptions`, `GetCanonicalFileName`, `DialogRow`, `BorrowingMutex`, `PluginComponents`, `GetServiceCommandInput`, `MockEntityMapperService`, `SchemaElement`, `GridItem`, `TextureSourceOptions`, `LossOrMetricFn`, `AppStateModel`, `ErrorArea`, `reqType`, `IGetTimeSlotInput`, `SidebarState`, `AnalyticsFromRequests`, `CustomResourceRequest`, `ITenantService`, `IFormState`, `Drawing`, `RootStore`, `SelectItem`, `VoiceOptions`, `Birds`, `SubgraphDeploymentIDIsh`, `AssetUtils`, `ParentGroup`, `BarData`, `PrismService`, `CollisionTree`, `dayjs.Dayjs`, `ChimeSdkWrapper`, `HttpProbe`, `makerjs.IModel`, `RegisterOptions`, `TrieNode`, `StreamDeck`, `VehicleEvent`, `ICitable`, `PieSectorDataItem`, `DateFnsHelper`, `DeleteDatasetCommand`, `SqlTuningTaskSqlExecutionPlanStep`, `ts.FormatCodeOptions`, `ItemStyle`, `StartExportTaskCommandInput`, `NativeEventSubscription`, `EmptyStatement`, `Extent`, `ModelResponse`, `NodeVersion`, `ModifyClusterCommandInput`, `GithubRepo`, `LoansService`, `KanbanBoardState`, `ParseResults`, `GridColumnConfig`, `ResolverContext`, `FrameRequestCallback`, `BoundedGrid3D`, `DePacketizerBase`, `ScopedHandler`, `ExecutionWorker`, `IconButtonGridProps`, `DeleteTransformsRequestSchema`, `IStrapiModelExtended`, `Masset`, `freedom.FreedomInModuleEnv`, `Tunnel`, `GeoJsonProperties`, `Optic`, `textChanges.ChangeTracker`, `RenderTarget`, `SortEvent`, `parser.PddlSyntaxNode`, `CallReturnContext`, `NodeWrap`, `Ellipsoid`, `RecordManager`, `express.RequestHandler`, `PartialCell`, `UnaryOpProgram`, `ScriptingDefinition`, `VehicleCountRow`, `PaymentProvider`, `AZSymbolKind`, `JwtUserData`, `S3Config`, `AuthorReadModel`, `MatChipInputEvent`, `ValueMetadataBoolean`, `RemoveEvent`, `PointerCoordinates`, `CrossMentor`, `SimpleOption`, `AnySpec`, `IPicture`, `MessageHeader`, `EntityDictionary`, `IPromise`, `HttpServerType`, `FP`, `TypeVarType`, `IAnyType`, `AUTWindow`, `A2`, `Node.MinimalTransaction`, `PartitionStyle`, `GetQueryStatus`, `LocIdentifier`, `RouterMenuItem`, `FacetValue`, `AsyncSchema`, `IParticleValueAnimation`, `ShortChannelId`, `CoinbaseKey`, `EngineArgs.ApplyMigrationsInput`, `AnyElt`, `StorableComponent`, `CheckAndApproveResult`, `IRequireMotionAction`, `SectionVM`, `DispatcherPayloadMeta`, `ILinePoint`, `ContainerRef`, `DocTableCell`, `EnvFile`, `CanvasLayer`, `Focus`, `FunctionShape`, `MetaBlock`, `ViewerOut`, `DQAgent`, `PickingInfo`, `SkeletalComponent`, `CheckRunPayload`, `SpotLight`, `Vector2Arrow`, `AnchoredChange`, `IBifrostInstance`, `SendInfo`, `ContextSetImpl`, `BooleanLiteral`, `u8`, `JSONSchema3or4`, `HandlebarsTemplateDelegate`, `DeleteBucketTaggingCommandInput`, `DefineDatas`, `NavBarProps`, `TaskManagerConfig`, `Z64Online_ModelAllocation`, `OS`, `NettuAppRequest`, `ProjectRole`, `PluginObject`, `FSJetpack`, `ExtractModConfig`, `KeycloakService`, `Posts`, `IDevice`, `filterInterface`, `Thread`, `GfxRenderPassP_WebGPU`, `PureSelectorsToSelectors`, `vscode.OnEnterRule`, `IParams`, `LoggerConfig`, `CSSResolve`, `AllureGroup`, `CommonAlertState`, `SearchSourceFields`, `ThyDialogRef`, `HTMLStyleElement`, `ServiceWorkerConfig`, `IServiceIdentifier`, `TableFilterDescriptor`, `DocLinksStart`, `SurveyElementEditorContentModel`, `SaveFileReader`, `WritableStream`, `AttributeMask`, `ISPList`, `ExecutionInfo`, `EnumOptions`, `ETHOption`, `SceneControllerConfigurationCCSet`, `RoleType`, `WhenCause`, `AssetInfo`, `G6Edge`, `Micromerge`, `QueryMwRet`, 
`StartFrame`, `GridIndex`, `ListRepositoriesCommandInput`, `TIndex`, `AuthSigner`, `UpdateBotCommandInput`, `FabricPointerEvent`, `TemplateType`, `EqualityFn`, `Phase`, `RegisteredServiceSingleSignOnParticipationPolicy`, `BrowsingData.DataTypeSet`, `formValues`, `SavedObjectsDeleteOptions`, `JPiece`, `MDCTopAppBarAdapter`, `ECSqlInsertResult`, `MiddlewareResult`, `RemoteDatabase`, `EncString`, `NotebookCellOutput`, `PostcssStrictThemeConfig`, `EventParams`, `StepConditional`, `WebGLRenderer`, `ConnectOptions`, `ethers.providers.TransactionRequest`, `PUPPET.payloads.Message`, `ListFilesStatResult`, `IBasicSessionWithSubscription`, `Customizable`, `NamedModel`, `AttachPolicyCommandInput`, `CssNode`, `LogAttributes`, `ListView`, `EventListenerRegister`, `ImageGallerySource`, `HeaderObject`, `SerializedTreeViewItem`, `UpdateData`, `ShortValidationErrors`, `pxt.auth.Badge`, `TemplateSource`, `GfxProgramDescriptor`, `SfdxWorkspaceChecker`, `ts.WatchOfConfigFile`, `CBPeripheralWithDelegate`, `FieldAppearanceOptions`, `PropertyLike`, `ReConfigChunk`, `SimpleDate`, `AndroidBinding`, `JSONSourceData`, `NzSliderValue`, `UrlGeneratorContract`, `DisjointSetNode`, `RouteConfig`, `DaffOrderTotal`, `TypeDefinitionNode`, `AggregateMeta`, `Process`, `Sinks`, `MembersInfo`, `ResolvedNative`, `FontName`, `PanelComponent`, `IUserWithGroups`, `EmitFlags`, `StateVariables`, `LanguageConfiguration`, `createAction.Action`, `TBEvent`, `SubEntityProps`, `Revalidator`, `JoinTable`, `WS.MessageEvent`, `CheckerBaseParams`, `timePickerModule.TimePicker`, `IndexFormat`, `CameraUpdateResult`, `DragSourceMonitor`, `CreateSelectorFunction`, `ContractInfo`, `ICXCreateOrder`, `RpcRequestFulfillment`, `IPropertiesAppender`, `CommandLineAction`, `RenderMode`, `ParameterNameValue`, `NetworkScope`, `TimeChangeSource`, `ReplyShortChannelIdsEndMessage`, `GetNetworkProfileCommandInput`, `DependencyName`, `LineIndexSnapshot`, `APIHandler`, `GenerateAsyncIterable`, `TableSuggestion`, `HistoryType`, `ExpectResponseBody`, `AlterTableModifyColumnBuilder`, `RootStoreType`, `BuildOptionsInternal`, `CalculateNodePositionOptions`, `WatcherHelper`, `SessionExpired`, `LocalSession`, `AngularScope`, `NzCarouselContentDirective`, `OnResolveArgs`, `MatchArgsToParamsResult`, `IntrinsicTypeDescriptor`, `TimelineDateProfile`, `EnvSimple`, `LoadedConfigSelectors`, `ReferencePosition`, `LayersTreeItem`, `StateDB`, `RenderTask`, `DBAccessQueryResult`, `Vec2`, `CustomCompletionItem`, `ParserResult`, `AnimationEvent`, `InputButtonCombo`, `SyncOptions`, `Retro`, `OOMemberLookupInfo`, `ClampedMonth`, `ISpace`, `EditorConfig`, `HintFile`, `DeleteSnapshotScheduleCommandInput`, `TConstructor`, `PointerStates`, `SCHEMA`, `EnhancedSelector`, `SignalID`, `RuleTarget`, `CircuitState`, `QueryTuple`, `ListMultipartUploadsRequest`, `AnyCardInGame`, `FetchType`, `ProviderConstructor`, `ConstantExpr`, `RenderItem`, `HsMapService`, `EqualityConstraint`, `DependencyManager`, `AppConfigService`, `Electron.WebContents`, `IHooksGetter`, `MarkdownItNode`, `FormatParams`, `TestRenderTag`, `Flanger`, `AppAndCount`, `DialogState`, `Arc3d`, `DarwinMenuItemConstructorOptions`, `IpfsApi`, `Mismatch`, `ISeries`, `Poller`, `faunadb.Client`, `SlippageTolerance`, `EventDispatcher`, `d.HotModuleReplacement`, `PutAccountDedicatedIpWarmupAttributesCommandInput`, `TranslateResult`, `INodeData`, `IMatch`, `ConnectionCloseFrame`, `NSArray`, `IndexField`, `SelectQueryNode`, `GaugeRangeProperty`, `CreateEventSubscriptionMessage`, `TypeRegistry`, `ts.LineAndCharacter`, 
`CommandInputParameterModel`, `VisToExpressionAst`, `FractalisService`, `DirectiveMetadata`, `PathFinderPath`, `FileBuild`, `ProductSet`, `PostsService`, `GameName`, `PhrasingContent`, `HtmlContextTypeOptions`, `CompilerFileWatcher`, `SelectorInfo`, `InlineDatasources`, `ActionGroup`, `AdminDatabase`, `ScreenSpaceProjection`, `Electron.BrowserWindow`, `TYPE_AMOUNT`, `Timing`, `ChildrenService`, `MarkdownIt`, `ICreateUserDTO`, `EdaBlankPanelComponent`, `IParseOptions`, `EncounterState`, `CustomQueryState`, `CommitterMap`, `MatchEvent`, `IFilterContext`, `ShaderPass`, `FailedShard`, `ActiveSession`, `UploadProps`, `TranslateOptions`, `IndividualTestInfo`, `Elt`, `CalendarViewType`, `jest.CustomMatcher`, `DidExecutedPayload`, `ComposedPublicDevice`, `ParticleEmitter`, `vscode.InputBoxOptions`, `MaybeCurrency`, `UnboundType`, `T.Refs`, `PlugyPage`, `ITagInputItemProps`, `Animated.CompositeAnimation`, `TimelineItemProps`, `Star`, `CommitChangeService`, `TreeSelectionState`, `GeometryKindSet`, `MenuContext`, `StructPrimitiveType`, `LimitExceededException`, `SuggestionsRequest`, `UpdateFilter`, `StatePropsOfCombinator`, `HsButton`, `JsonRpcRecord`, `BuilderEntry`, `MarkExtensionSpec`, `OperationStatus`, `Dryad`, `BoxKeyPair`, `DemoConfig`, `T.Effect`, `Secrets`, `CompareResult`, `BehaviorMode`, `IPermissionState`, `DynamicClasses`, `CollectorEntity`, `bAsset`, `GrayMatterFile`, `CourseId`, `Tabs.Tab`, `IQueryParam`, `ChunkGroup`, `PaletteType`, `SecretRule`, `Redex`, `STColumnButton`, `OmvFeatureFilterDescription`, `UserStore`, `PolygonEditOptions`, `UncachedNpmInfoClient`, `HybridConnection`, `FragmentSpread`, `SolStateMerkleProof`, `VNodeChildren`, `Downloader`, `IpcEvent`, `DeployFunction`, `ProjectState`, `Factory.Type`, `ListrBaseClassOptions`, `IAssetComponentItem`, `SocketStream`, `bsiChecker.Checker`, `NetMDInterface`, `MigrateDev`, `UseComponent`, `IResponseAction`, `StreamSpecification`, `NetworkConfiguration`, `PageModel`, `LoadingController`, `CalloutContextOptions`, `ethers.utils.Deferrable`, `IClientRegistrarOptions`, `KeymapItem`, `Mocha.MochaOptions`, `CombatService`, `React.TouchEvent`, `FieldConfig`, `evt_exec`, `WebrtcConn`, `NormalBold`, `hubCommon.IModel`, `ResponderConfiguration`, `FoldingRangeParams`, `EditorSuggestionPlugin`, `ImageUse`, `todo`, `ScriptBuilder`, `IFluidResolvedUrl`, `SolverT`, `IonRouter`, `CkElementProps`, `ActionSheetController`, `MockState`, `PartialTheme`, `CompositeDisposable`, `FragmentType`, `RecoilTaskInterface`, `VM`, `ConnectionsManagerService`, `RectDelta`, `P2`, `V1WorkflowInputParameterModel`, `AccountEmail_VarsEntry`, `ModelMesh`, `d.Encapsulation`, `MetadataRecord`, `Contour`, `PointCloudOctreeNode`, `Vec3Term`, `A7`, `CompilerBuildStart`, `ReadWriteStream`, `Seeder`, `ArgStmtDecl`, `FlatList`, `DependencySpecifier`, `SiteTreeItem`, `GfxBufferFrequencyHint`, `QueueObject`, `PrimaryKeyType`, `Conv2DProgram`, `ValueAccessor`, `AccountAttribute`, `EndPointService`, `IpAddressWithSubnetMask`, `Effector`, `Bot`, `IFeed`, `TreeViewNode`, `GitBlame`, `DialogService`, `WorkRequestResourceMetadataKey`, `TableData`, `CollectionContext`, `IndexState`, `RootValue`, `NuxtAxiosInstance`, `LogRequest`, `Aes256Key`, `DeleteListenerCommandInput`, `ProviderToken`, `ContractFactory`, `ChainService`, `SelectedScriptStub`, `LogContext`, `DropdownMenuInitialState`, `ImmutablePeriod`, `ListParticipantsResponse`, `VMLElement`, `Nav`, `TreeNodeType`, `t_63513dcd`, `RemoteUser`, `CodeRange`, `BerryOrm`, `DescribeImagesCommandInput`, `IOtherExpectation`, 
`AutofillField`, `Todo`, `location.CloudLocationOption`, `ServiceInfo`, `PadplusContactPayload`, `UI`, `CRDTObject`, `LengthParams`, `IMessageMetadata`, `WaitImageOptions`, `UnsupportedOperationException`, `ResponseFactory`, `MaterialCache`, `UICollectionViewLayout`, `JsonFile`, `browser.tabs.Tab`, `MetricDimension`, `OpDescription`, `ListingModel`, `unified.Processor`, `TransformPivotConfig`, `Pubkey`, `WsChartService`, `ActiveMigrations`, `RtmpResult`, `ColorPickerEventListener`, `d.JsonDocsMethod`, `VpnGateway`, `FunctionProps`, `DataFormat`, `Teams`, `CodeActionKind`, `FileExtensionMap`, `Vertices`, `ListrEvent`, `LoadingOptions`, `BVEmitter`, `DigitalNode`, `SortService`, `SemanticTree`, `ShapeDef`, `PersistedState`, `RawPackages`, `Insets`, `TransformBaseline`, `ModelHandle`, `ZoneSpec`, `TestHandler`, `MaybeArray`, `Key`, `DataSourceParameters`, `CodeFlowAnalyzer`, `Dropout`, `Contents`, `ApiMethodScheme`, `SymInt`, `ConfigurationCCGet`, `ts.DocumentRegistry`, `PathParser`, `Myth`, `SingularReaderSelector`, `Step`, `ClockOptions`, `WorldLight`, `Redux.Reducer`, `TimingInfo`, `TabOption`, `IpcResponse`, `ListJobsCommandOutput`, `ComputationCache`, `CertificateAndPrivateKeyPair`, `ScaleObject`, `FullUser`, `PreferenceStateModel`, `MessageValue`, `OpenAPIV3.SchemaObject`, `NestedPageMetadata`, `BadgeStyleProps`, `ColumnSeriesDataItem`, `IHttpRes`, `MaterialColor`, `RebaseEntry`, `ILine`, `Sblendid`, `DateUtilsAdapter`, `SwalOptions`, `CalculationId`, `Radius`, `Footnote`, `ISurveyStatus`, `FIRDatabaseReference`, `WordOptions`, `WithCondition`, `SectionsType`, `ISolutionService`, `CreepSetup`, `GraphicMode`, `CoordinatesObject`, `ILinkInfo`, `XPCOM.nsIHttpChannel`, `RecognitionException`, `StringOrNumber`, `Defs.CompactdState`, `AxesTicksDimensions`, `MapFunction`, `gameObject.Fish`, `SignedBy`, `EchPalette`, `DMMF.Mappings`, `SpawnSyncReturns`, `DeeplinkParts`, `internal`, `DriveItemData`, `JID`, `MediaListOptions`, `IDataProvider`, `ComponentMap`, `ProductType`, `IDirectoryModel`, `InputValueDefinitionNode`, `BabelFileResult`, `QAction`, `SendAction`, `TasksStoreService`, `Roles`, `DiffHunk`, `IFileStore`, `ArrowFunction`, `LinkRecordType`, `DescribeTagsCommandInput`, `MatMenuPanel`, `SearchModeDescription`, `TelemetrySender`, `IPublisher`, `VanessaEditor`, `CkbTxGenerator`, `vscode.TextDocumentContentChangeEvent`, `OptionsWithUrl`, `MultiFn1O`, `PutPermissionCommandInput`, `AwsClientProps`, `WEBGL_debug_renderer_info`, `GeneralActionType`, `RootProps`, `CommandLineArgs`, `Observer`, `Performance`, `RowOfAny`, `IAPIRepository`, `DDL2.Schema`, `WithGenericsSubInterface`, `MonthOrYearComponents`, `IDesignLike`, `DemoAppAction`, `ARGS`, `IPC.IFilePickerFileInfo`, `HapService`, `InputObject`, `GetTableRowsResult`, `DOMOutputSpec`, `FrameBase`, `ConvertedType`, `MetricCollection`, `GravityInfo`, `Uint16Array`, `AppearanceProviderFor`, `LightData`, `AuthoringWorkspaceService`, `AnalyticsProvider`, `ValidationContext`, `RequestCredentials`, `OnPreResponseHandler`, `DisclosureStateReturn`, `IdentifyOperation`, `GetProjectCommandInput`, `GridLayout`, `WorkerInterface`, `PathPredicate`, `PositionedTickValue`, `ProviderIndex`, `UseQuery`, `JKRArchive`, `ZeroXOrders`, `ILabShell`, `next.Sketch`, `ClientChangeList`, `IGif`, `BotAdapter`, `BitbucketPrEntity`, `TextElementGroup`, `requests.ListCaptchasRequest`, `Scheme`, `TheEventbridgeEtlStack`, `IBatteryEntityConfig`, `ISourceLocation`, `IFeatureCommand`, `CallbackDataParams`, `PrismaService`, `ZoneManagerProps`, `FileObject`, 
`HttpAuthenticatedConnection`, `FunctionAppService`, `SceneControllerConfigurationCCGet`, `ElementMetadata`, `SpatialDropout1DLayerConfig`, `DaffStatefulCartItem`, `CompiledHierarchyEntry`, `TestDispatcher`, `page`, `ParameterExpression`, `PIXI.interaction.InteractionEvent`, `LeafletEvent`, `Driver`, `ExtraContext`, `BoomTheme`, `CachedVoiceState`, `FactoryOptions`, `SavedObjectsExportablePredicate`, `RemoteCallParticipants`, `ItemPredicate`, `PopupType`, `GfxBindings`, `RulesModel`, `KsDiagnostic`, `IndexedNode`, `ExpressionAttributeValueMap`, `SortOption`, `GetState`, `AngularFirestore`, `ShapeView`, `RepositoryStatisticsReadModel`, `SoftwareTransaction`, `IPlayable`, `DAL.DEVICE_ID_SYSTEM_DAC`, `EmbeddableStateTransfer`, `Elements`, `Invalidator`, `VariableStatementStructure`, `SearchSessionsConfig`, `EventManagerConfig`, `RTCTrackEvent`, `ArenaCursor`, `PgClass`, `UpdateParameters`, `ItemData`, `PersonalAccessTokenCredentialHandler`, `Prefix`, `DescribeUsersCommandInput`, `Recognizer`, `TrackerConfig`, `TokenSharedQueueResult`, `StreamingClient`, `Letter`, `ByteReader`, `UmlNotation`, `RecommendationType`, `Screenview`, `CanvasEvent`, `MimeType`, `OcticonSymbol`, `ObjectPool`, `StoreManager`, `In`, `ParsedSelector`, `StatedBeanContextValue`, `Placeholder`, `EventActionHandlerMutationActionCallable`, `AccordionItemComponent`, `AnyExpressionRenderDefinition`, `SignalingClient`, `IListItem`, `ScalarMap`, `CalcScaleAnmType`, `BigComplex`, `Uniform`, `NestedResource`, `TestFailure`, `MatTab`, `EIP712Domain`, `DisassociateMemberCommandInput`, `UpdateDatasetEntriesCommandInput`, `StaticBlog`, `IIconProps`, `EventAsReturnType`, `DataAnalyzeStore`, `d.PrerenderResults`, `cxapi.Environment`, `DoClass`, `MapStateProps`, `DragCheckProps`, `TruncatedNormalArgs`, `ReactDivMouseEvent`, `RegularNode`, `T.Matcher`, `AbstractUIClass`, `lex.Token`, `RestServer`, `BuildingEntity`, `DescribeWorkspaceDirectoriesCommandInput`, `EnumLiteralType`, `CreateDBInstanceCommandInput`, `ITaskConfig`, `SourceCode`, `StyledIconProps`, `JwtPair`, `OaiToOai3FileInput`, `StoreValue`, `ExtendedKeyInfo`, `IAmazonFunctionUpsertCommand`, `TMeta`, `TableEntityResultPage`, `ShapeInfo`, `FC`, `IEsSearchResponse`, `HitCircleVerdict`, `DescribeDBClustersCommandInput`, `STATE`, `Defines`, `TileSetAssetPub`, `PhotosaicImage`, `INetEventHandler`, `core.DescribePath`, `DeleteRoomCommandInput`, `DkrTextureCache`, `Initializer`, `ReplicaOnPartition`, `JSONChunk`, `ModelViewer`, `OrbitControl`, `GraphImmut`, `CanvasRenderingContext2D`, `CoinSelectOptions`, `ErrorReason`, `PluginsConfig`, `WebGLProgram`, `IChunkHeader64`, `MatCheckbox`, `SaveEntitiesSuccess`, `EntityComparator`, `CsmPublishingCredentialsPoliciesEntity`, `DownloadItem`, `requests.DeleteJobRequest`, `ArrowProps`, `ParsedPacket`, `GenericGFPoly`, `TimeFormat`, `GAMEOBJECT_SIGN`, `InteractionType`, `OutputError`, `ReferenceRecord`, `PaginationState`, `MysqlError`, `ZRRawEvent`, `NumberLiteralContext`, `SFATextureArray`, `ECDSASignature`, `RustError`, `EncArrayBuffer`, `CreditCardEscrow`, `core.IProducer`, `TEntry`, `CodelistRow`, `AcceptChannelMessage`, `Realm.ObjectSchemaProperty`, `FeeLevel`, `DataFrameAnalyticsConfig`, `DaffCategoryIdRequest`, `SpatialViewState`, `NdQtNode`, `EventTarget`, `IEventContext`, `SearchByIdRequest`, `NaotuConfig`, `ExactPackage`, `ResponsiveColumnSizes`, `UpdateQueue`, `DeleteChannelMembershipCommandInput`, `CardFooterProps`, `WindowRefService`, `CheckpointProps`, `ElementHandleForTag`, `CollectionState`, `FastTag`, `AndroidProjectConfig`, 
`CallAdapter`, `d.CompilerBuildStart`, `NotificationHandler`, `requests.ListHealthChecksVantagePointsRequest`, `Type_Struct`, `GraphQLSchema`, `DialogBase`, `RouteContext`, `GestureEvent`, `ObjectBindingPattern`, `QualifiedName`, `Semigroupoid2`, `HandlerExecutionContext`, `StylesProps`, `ElementCore`, `LuaComment`, `ErrorItem`, `AccountDevice`, `IBudgieNode`, `BooruCredentials`, `IGameMessage`, `EstreeNode`, `AuctionView`, `GetAccountInfoRequest`, `EncryptedWalletHandler`, `CannonPhysicsComponent`, `ListApplicationsResponse`, `ESLMediaRule`, `GraphRbacManagementClient`, `ThermostatMode`, `WindowInfo`, `OpenSearchDashboardsRequest`, `DocString`, `TextureDataType`, `UserCredential`, `FIRDataSnapshot`, `GoEngineState`, `InstanceProps`, `restify.Response`, `CloudSchedulerClient`, `EntityDbMetadata`, `Parslet`, `ClaimDTO`, `GanttViewDate`, `IDotEnv`, `ConfigStructShape`, `React.ReactPortal`, `GraphQLNamedOutputType`, `FlattenInterpolation`, `MicrosoftComputeExtensionsVirtualMachinesExtensionsProperties`, `LLVMContext`, `IL10nsStrings`, `MapGroup`, `JQuery.Event`, `CatsService`, `KeyValueChangeRecord_`, `ServerItem`, `DSpaceObject`, `IDatabaseDataSource`, `CustomRule`, `StepsProps`, `AxiosInstance`, `ParticleArgs`, `DeleteDomainCommandInput`, `GetColumnWidthFn`, `TruncatableService`, `GtkElement`, `NumberOperands`, `PackageUrlResolver`, `requests.ListUsersRequest`, `GfxSamplerP_WebGPU`, `JSONProtocol`, `OrderPair`, `ISeedPhraseStore`, `PIXI.Text`, `PatternMappingNode`, `StepDetailsExposedState`, `TinyDateType`, `EntityType`, `ShallowRenderer`, `TypeSet`, `UpdateApplicationCommandOutput`, `EntityTypeDecl`, `BusInstance`, `FetchFunction`, `NetworkManagementClient`, `BrowserFeatureKey`, `IpcRenderer`, `WsProvider`, `ITestBillingGroup`, `Shared.SubscriberFactory`, `TSTypeLiteral`, `RneFunctionComponent`, `BlockFriendsRequest`, `POISearchParams`, `SelectionArea`, `QueryListProps`, `L2Creature`, `DiagnosticInfo`, `CreateClusterCommand`, `PrinterType`, `MockTextChannel`, `MessageStatusService`, `ApplicationTheme`, `TitleCollection`, `Persist`, `RenderStatistics`, `Attempt`, `VersionComponent`, `EndpointConfig`, `MerchantGameWinningEntity`, `Shared.TokenRange`, `UnitsImpl`, `MerchantMenuOrderEntity`, `ScatterProgram`, `OutputChannelLogger`, `DetectorCallback`, `PickPoint`, `DemandDTO`, `HTMLImageSource`, `IConnectionFormSubmitData`, `SModelIndex`, `Acc`, `PredicateType`, `DataPacket`, `TermAggregationOptions`, `NotifierService`, `AlertTableItem`, `ElementCreationOptions`, `GetDedicatedIpCommandInput`, `requests.ListPackagesRequest`, `ArgumentTypes`, `IController.IFunction`, `FileWithPath`, `ILoadbalancer`, `TransactionDetail`, `PluginConfig`, `ScrollToService`, `LoginUriApi`, `InternalLabConfiguration`, `ENDAttributeValue`, `NavigatorDelegate`, `AgencyApiResponse`, `IUi`, `ISegSpan`, `CompareAtom`, `PolygonFadingParameters`, `protos.common.SignaturePolicy`, `LogState`, `MarketCreatedInfo`, `InputEventKey`, `FeeAmount`, `BitmapDrawable`, `CommitInfo`, `MetadataCache`, `DataDefinition`, `PlacementConstraint`, `ProtocolFile`, `IEventInfo`, `ISnapshotContents`, `BaseDocumentView`, `TokenValue`, `CSSToken`, `UnionOf`, `ReactInstance`, `ChainStore`, `DebugConfiguration`, `PaneOptions`, `RRTypeWindow`, `UntypedProduct`, `RunnableTask`, `IsLocalScreenSharingActiveChangedListener`, `HyperModelingDecorator`, `SpacedRepetitionSettingsDelegate`, `IDData`, `Path4`, `ChildrenType`, `OpenEditorNode`, `Android`, `Count`, `HttpClientConfig`, `SubType`, `InstanceData`, `ScriptParsedEvent`, `TileGrid`, 
`PluginDependency`, `SessionStorageService`, `DeleteUtterancesCommandInput`, `XFilter`, `DeauthenticationResult`, `DiagnosticSeverityOverridesMap`, `TextStyleDefinition`, `VNodeStyle`, `MarketsAccount`, `TrackType`, `CreateAddLinkOptions`, `AsyncAction`, `IHand`, `KeyRingSelectablesStore`, `GX.Attr`, `FunctionAppRuntimeSettings`, `GetResultType`, `ts.ScriptKind`, `BlockContext`, `Disk`, `TETemplate`, `NetworkStatusEvent`, `SystemManager`, `Subscribers`, `tEthereumAddress`, `StackReference`, `SchemeObjectsByLayers`, `ParsedIniData`, `Padawan`, `TiledProperty`, `BindingOrAssignmentPattern`, `InstanceOptions`, `ClientRegistry`, `MachineContext`, `ListenerEntry`, `Typed`, `PurchaseOfferingCommandInput`, `StoredChannel`, `MosString128`, `SBDraft2CommandInputParameterModel`, `RoleTuple`, `InstantiatedContractTreeItem`, `AutocompleteRenderInputParams`, `LineTypes.MessageOptions`, `GoogleAppsScript.Spreadsheet.Sheet`, `ILoadbalance`, `GraphQLClient`, `OutPacketBase`, `TransformOutput`, `DataRepository`, `React.FocusEventHandler`, `LineStyle`, `FaunaNumber`, `JSystemFileReaderHelper`, `ApplicationStatus`, `CreateInputCommandInput`, `RecordedDisplayData`, `MROpts`, `IPointCloudTreeNode`, `TocService`, `ZonesManagerProps`, `StateStorageEngine`, `SackChunk`, `ResourceLocation`, `RouteEntry`, `Resizable`, `Stacks`, `SagaGeneratorWithReturn`, `Expand`, `UpdateOneOptions`, `_this`, `ScreenConfigWithParent`, `NodeWithPosition`, `ToJsonOutput`, `ICalendarEvent`, `FooId`, `IPos`, `TcpPacket`, `KeyAction`, `React.PointerEvent`, `TheiaURI`, `SelectorSpec`, `requests.ListRecommendationsRequest`, `LessOptions`, `ScreenDto`, `MetaState`, `NoteType`, `Transfer`, `ModelSchema`, `ClassInterpreter`, `UnaryExpression`, `XYZAnyValues`, `GenericRetryStrategyOptions`, `STPSetupIntent`, `FakeImporter`, `Monorepo`, `DragDropData`, `Protocol.Network.ResponseReceivedExtraInfoEvent`, `ChapterRow`, `SendRequestConfig`, `Types.RawMessage`, `_N`, `MemDown`, `RegistryRuleType`, `Comma`, `VersionCheckTTL`, `LitCallback`, `IamStatement`, `HtmlElementNode`, `IAuthHeader`, `SubEntityType`, `S3DestinationConfiguration`, `DisplayPartsSymbolWriter`, `VariableDeclaration`, `ServiceConfiguration`, `RenderSprite`, `CheckPrivileges`, `BeaconProxy`, `SyntheticEvent`, `NotificationConfiguration`, `TinyPgParams`, `Key4`, `QuerySort`, `SyncedBackupModel`, `IRGBColor`, `Spectator`, `RangeImpl`, `Cobranca`, `Fiddle`, `Effects`, `Config.ProjectConfig`, `ApproxResult`, `DescribeDatasetResponse`, `W`, `PointCloudOctreeGeometry`, `XmlTimestampsCommandInput`, `_ChildType`, `EditCategoryDto`, `NetworkModel`, `KibanaFeature`, `HasuraModuleConfig`, `IndexPatternsService`, `TestVisitor`, `Controlled`, `TagState`, `IOperation`, `Kind`, `PingProbeProtocol`, `SentryCli`, `ProjectConfigChangedEvent`, `AbstractSyntaxTree`, `Prisma`, `RoutingState`, `AutocompleteContext`, `BlockDoc`, `ISimpleConfigFile`, `SqlBuilder`, `album`, `IDashboard`, `KeyValuePair`, `ChildAppFinalConfig`, `SearchCommandInput`, `SearchSessionsMgmtAPI`, `IChangelog`, `Codeblock`, `DBTProjectContainer`, `TimeSheetService`, `PropertyPair`, `ISpawnOptions`, `Calculator`, `FormatTimeInWordsPipe`, `ConditionalExpression`, `AxesProps`, `IMatcher`, `QueryBinder`, `GameContent`, `CollateralizerContract`, `M.Middleware`, `Images.Dimensions`, `OpenGraph`, `LiveList`, `RealtimeController`, `EventAdapter`, `YAMLDocument`, `TemplateLiteral`, `ControlService`, `Firebase`, `CustomFormGroup`, `IEdgeAD`, `ContextErrorMessageProps`, `EventContext`, `ScriptingDefinitionStub`, `DMMF.Document`, `References`, 
`GestureDelegate`, `DebugProtocol.ContinueArguments`, `NDframe`, `Evaluated`, `IWarehouse`, `SafetyNetConfig`, `IMeasurementEvent`, `SystemUserApi`, `DAL.KEY_W`, `ParamType`, `DAL.DEVICE_ID_TOUCH_SENSOR`, `EntityData`, `AudioParam`, `IEncoderModel`, `IHSL`, `ParseIconsOpts`, `StorageTier`, `TypeScriptEmbeddedSource`, `OutputSchemaField`, `DebugProtocol.SetBreakpointsResponse`, `JumpyWidget`, `JSONRPC`, `Self`, `WebSocketLink`, `AlternateSymbolNameMap`, `Route53`, `GherkinType`, `NetworkLoadBalancer`, `GenericCompressor`, `JsonDocsUsage`, `ArenaNodeInline`, `estypes.MgetResponseItem`, `RunSpec`, `PortfolioOverviewView`, `DriveItem`, `VirtualNetworkPeering`, `IRequestHandler`, `ColumnSeries`, `StopPipelineExecutionCommandInput`, `SimNode`, `HalfBond`, `StartServicesAccessor`, `TReferences`, `ParsedPath`, `PickerColumn`, `UploadRequest`, `SimpleChoiceGameState`, `ConfigurationParams`, `Address4`, `Swagger2Schema`, `DomainsListOptionalParams`, `DokiSticker`, `ReducerHandler`, `SyncValue`, `CallEndReason`, `TokenFetcher`, `DescribeClustersCommandInput`, `MetaService`, `core.LifecycleSettings`, `UICommand`, `PedersenParams`, `CreateThemeCommandInput`, `TypeCache`, `ValidateFilterKueryNode`, `TransactionsResponse`, `QueryContext`, `SidebarTitleProps`, `RippleGlobalOptions`, `HasUniqueIdentifier`, `IEcsServerGroupCommandResult`, `AppFileStatus`, `IImport`, `ts.TryStatement`, `PrimedCase`, `DejaTreeListComponent`, `Selected`, `Geom`, `MemoryStore`, `ComponentCompilerMethod`, `OasSchema`, `SnakePlayer`, `AsyncOptions`, `AreaState`, `EnforceNonEmptyRecord`, `ResumeData`, `MOscPulse`, `VillainService`, `InstallOptions`, `UserInfoStore`, `LineColPos`, `PointSeries`, `Framebuffer2D`, `SearchEmbeddableFactory`, `SetupObjects`, `View.Mail`, `SVGImageElement`, `IconifyIconName`, `TVector`, `FunctionDef`, `JsonSourceFile`, `RemoteParticipantState`, `IResolveWebpackConfigOptions`, `WebRtcTransport`, `FeatureCollection`, `GeneratedReport`, `TypeTarget`, `UIPageViewController`, `DataTypesInput`, `Electron.Menu`, `SpaceMembershipProps`, `TDDraw`, `TotemFile`, `NodeTypeMetricCapacity`, `MThumbnail`, `PlanetComponentRef`, `HttpResponseOptions`, `VariableDeclarationContext`, `android.webkit.WebView`, `NoShrinkArray`, `ContainerClient`, `InitialValues`, `Iterator`, `RouteLocationNormalizedLoaded`, `MapBrowserEvent`, `ICredentials`, `IStaticWebAppWizardContext`, `CrochetActivation`, `d.CompilerEventName`, `TestDataObject`, `Handles`, `RotationManager`, `IndexKey`, `AclEntry`, `SpecDefinitionsService`, `AuthPluginPackage`, `DockerOptions`, `Keybinding`, `P10`, `QueryBodyType`, `PoolClientState`, `Fr`, `ToolsService`, `TsAutocompleteComponent`, `ContinuousDomainFocus`, `UploadTaskSnapshot`, `GitError`, `BuildParams`, `StringWriter`, `ISafeFont`, `AMap.Map`, `ColorMap`, `DfsResult`, `ParsedLock`, `AccountPagination`, `angu.Value`, `DragulaService`, `IKEffector`, `ClusterExplorerNode`, `SchemaFactory`, `TerraformVars`, `ProgramState`, `CollaboratorService`, `ICredentialDataDecryptedObject`, `PanelConfigProps`, `OrgDataSource`, `Matrix33`, `ImplicationProofItem`, `Edge`, `FieldsTree`, `TextDecoder`, `UrlSerializer`, `IDinoRequestEndProps`, `TextAreaProps`, `OpenSearchDashboardsDatatableColumnMeta`, `HdBitcoinCashPayments`, `ICanvasRenderingContext`, `knex`, `DataProvider`, `StringScannerOutput`, `CallbackResult`, `vscode.TextDocumentChangeEvent`, `ScriptVM`, `ValueScopeName`, `NamedBinding`, `IOdspTokens`, `Template`, `EVENT`, `ResolvedConceptAtomTypeEntry`, `IExtensionPlugin`, `FileUploader`, `LoopBackAuth`, 
`SharedModel`, `NamedVariableMap`, `XTermColorTheme`, `HumidityControlSetpointType`, `ApiAdapter`, `ConfigManager`, `TLinkCallback`, `RenderOption`, `PagedAsyncIterableIterator`, `PaginatedTiles`, `PublicVocabulary`, `Projection`, `DayPlannerSettings`, `StacksMessageType`, `DefaultChangeAnalyzer`, `GroupsService`, `NamedFragments`, `Divider`, `SocialTokenV0`, `requests.ListResourceTypesRequest`, `Models.AccessTier`, `ExtensionContext`, `ObjectSchema`, `SonarQubeApiScm`, `BUNDLE_TYPE`, `commandInterface`, `GridEntry`, `Datetime`, `QListWidgetItem`, `EncryptedWalletsStore`, `DirectiveDef`, `TMenuOption`, `AutoScalingPolicy`, `Witness`, `DebugStateLegend`, `WalkContext`, `AuthenticationState`, `VariableType`, `IQuestionToolboxItem`, `GraphRequest`, `PopupPositionConfig`, `ExtendedOptions`, `Interpret`, `EnqueuedTask`, `ChatAdapter`, `ts.NamedDeclaration`, `Percussion`, `SortEnd`, `MetadataKey`, `IEntityOptions`, `SyncModule`, `SortOrderType`, `FakeSurveyDialog`, `CreateApplicationVersionCommandInput`, `ConnectionGroup`, `DeleteBucketCommandInput`, `CspConfigType`, `EvolvingArrayType`, `ApiResultCallback`, `backend_util.Conv3DInfo`, `ThySlideContainerComponent`, `PublishCommandInput`, `IssueType`, `requests.ListVmClusterNetworksRequest`, `yubo.MessageService`, `FilePathKey`, `ServiceState`, `P2PRequest`, `DescribeClustersResponse`, `TCountData`, `LiteralNode`, `FileOpItem`, `V1Scale`, `DataOptions`, `ChangeTracker`, `ChartOptions`, `PlaylistWithLoadingState`, `EnumValue`, `ModalRef`, `AppNode`, `SuiTabHeader`, `OnceTask`, `NEOONEDataProvider`, `OrderStatusState`, `SelectItemValue`, `SafeString`, `TestChangesetSequence`, `ServerObject`, `OPaths`, `Ganache`, `estypes.QueryDslQueryContainer`, `BBOX`, `AndExpression`, `SiteConfigResource`, `UICollectionView`, `ExceptionListClient`, `AllDestinations`, `App.ui.INotifications`, `EvaluateMid`, `SendData`, `Discord.Client`, `CryptoProvider`, `CodeSpec`, `GossipError`, `GeneralCallbackResult`, `TickPositionsArray`, `SettingsV11`, `AstRoot`, `LonLatArray`, `GetConfigFn`, `RecipientElement`, `ThingsPage`, `RenderParams`, `IgnoreDiagnosticResult`, `Package.Package`, `requests.ListInstanceAgentCommandsRequest`, `EmojiParseOptions`, `ControllerOptions`, `DomainInfo`, `ParameterReflection`, `ClientIdentity`, `LURLGroup`, `ProjectType`, `FormattedEntry`, `IProductTranslatable`, `WaveShaper`, `UserEntity`, `Specifier`, `BBoxObject`, `MeshBasicMaterial`, `PluginOption`, `SummaryObject`, `ILoginState`, `ISubscriptionContext`, `TConfig`, `ICellMarker`, `MiLayerData`, `nodes.RuleSet`, `GeneralSettings`, `androidx.fragment.app.Fragment`, `PrettierConfig`, `ServiceName`, `RootType`, `ArmSaveConfigs`, `ts.FunctionDeclaration`, `OnTabSelectedlistener`, `AnyChildren`, `MatchFilter`, `V1Job`, `planner.Planner`, `MessageReadListener`, `TimestampInMillis`, `IExportMapMetadata`, `ComponentEvent`, `IMergeBlock`, `BookData`, `ListingData`, `AppwriteProjectConfiguration`, `ExpressConnection`, `InputLayerArgs`, `ProductVariantPriceService`, `MessageListener`, `HardRedirectService`, `AfterGenesisBlockApplyContext`, `LuaParse`, `ArrayBuilderSegment`, `BlitzPage`, `PathConfigMap`, `Concourse`, `NitroState`, `WaveProperties`, `ListHealthChecksVantagePointsRequest`, `ShapeStyle`, `InjectContext`, `tfl.SymbolicTensor`, `MappingObject`, `RouteName`, `Restriction`, `DescribeEngineDefaultClusterParametersCommandInput`, `SagaReturnType`, `DoubleLinkedListNode`, `TypedArray`, `commonmark.Node`, `HttpConnection`, `SpineAnimation`, `TraverseCallbackType`, `Shape2DSW`, 
`VConsoleNetworkRequestItem`, `TypeDictionaryInfo`, `ReferencingColumnBuilder`, `DAVAccount`, `AnnotationOptions`, `SummaryData`, `GroupByPipe`, `GetWebhookParams`, `Contactable`, `KdNode`, `SubmissionCcLicence`, `OnboardingPage`, `ISignalMessage`, `AnnotationLevel`, `Recipient`, `SpyData`, `Buffers`, `CanvasGraphic`, `TensorInfo`, `ToneOscillatorNode`, `LinkData`, `paper.Path`, `PLSQLCompletionDefinition`, `ModuleOptionsWithValidateFalse`, `DeregisterInstanceCommandInput`, `MdcSnackbarRef`, `NetworkDiagnosticChangedEventArgs`, `TRequestWithUser`, `PullFromStorageInfo`, `Cumulative`, `ItemRenderer`, `SyntheticPerformanceBudget`, `LuxonDateTime`, `IMatchWarriorResult`, `Bone`, `ComponentConfig`, `AxisEdge`, `MigrationData`, `Real`, `TokenCategory`, `TmdbTvResult`, `ReactQueryMethodMap`, `RootToken`, `ComponentController`, `EAdvancedSortMethod`, `DeviceManifest`, `apid.ReserveSaveOption`, `UpdateProjectRequest`, `DeviceProps`, `OcsHttpError`, `SecurityQuestionStore`, `DescriptorValue`, `JoinPoint`, `NzUploadFile`, `ScanPaginator`, `ScriptStub`, `MacroTask`, `Token.Token`, `React.VFC`, `BotResponseService`, `ThemeData`, `ILoggerColorParams`, `NoArgListener`, `EnvPaths`, `BundleResult`, `DMMF.Model`, `CastNode`, `RequestBodyParser`, `SwimLane`, `RTCPFB`, `CoinMap`, `TypescriptMember`, `FullConfiguration`, `RuleWithFlags`, `UIEvent`, `Exec`, `DailyRotateFile`, `ForeignKeyModelInterface`, `SenseEditor`, `IndexProperty`, `IExecOptions`, `IndividualChange`, `SagaConfig`, `DomainEntry`, `Transcoder`, `RarityLevel`, `PermissionData`, `ConstructorOrField`, `PlaceholderContent`, `TEmitted`, `KeyLike`, `JSXNode`, `SimpleRange`, `StringDict`, `ExternalAuthenticateModel`, `Fence`, `SortDirectionNumeric`, `RegisteredServiceAttributeReleasePolicy`, `ConfirmDialog`, `PlanningResult`, `ContentLocation`, `ImmutableSelectorNode`, `FeatherProps`, `DocInfo`, `GXMaterialBuilder`, `eventWithTime`, `TableNode`, `FbFormPermission`, `ChartType`, `CacheQueryOptions`, `EvaluationStats`, `DiffError`, `ClippingPlane`, `OrderByItemNode`, `id`, `DeleteTagsCommandInput`, `ClassTypeFlags`, `AsyncGenerator`, `PersianDate`, `ClientId`, `Proto.FileLocationRequestArgs`, `Config.IConfig`, `SearchCallback`, `Draft.EditorState`, `RejectOnNotFound`, `RemoteEngine`, `BinarySensorCCReport`, `SharedStreetsReference`, `UrlGeneratorInternal`, `HTTPRequest`, `FullType`, `StatsCollector`, `TimeRange`, `TouchEventHandler`, `_.Iso`, `EnvsRaw`, `TaskInput`, `SupportedExchange`, `AttentionLevel`, `PointSet`, `ParseField`, `ResourceHash`, `TransmartNegationConstraint`, `Player`, `LocalGatewayTreeItem`, `GamepadEvent`, `UsersEntity`, `RowRenderTreeType`, `EndpointOperationCommandInput`, `SetTree`, `Labels`, `ITriggerEvent`, `SystemMessageProps`, `ListParams`, `Localization`, `TreemapPoint`, `SendEmailJsonDto`, `ReadonlySet`, `NodeDetails`, `FileAvailability`, `ParsingState`, `esbuild.BuildOptions`, `StyledLabelProps`, `Model1`, `ImageMatrix`, `xyTYpe`, `StyleScope`, `JSDocTypedefTag`, `TranslationBundle`, `UpdateDatabaseResponse`, `STPCardBrand`, `StudentFeedback`, `d.CompilerRequestResponse`, `TrackModel`, `CountableExpectation`, `PlayerProps`, `PathVal`, `BuffData`, `Lab`, `ListPartsCommandInput`, `IJetURL`, `TransactionReceiptsEventInfo`, `HSD_Archive`, `PickingCollisionVO`, `SignatureResult`, `Fn0`, `OpenYoloWithTimeoutApi`, `HttpTestingController`, `ServerEngine`, `IValidationResult`, `roleMenuInterface`, `DebugSessionCustomEvent`, `WrapperArray`, `MDCTextFieldInputAdapter`, `Hull`, `EmbedOptions`, 
`requests.ListExternalContainerDatabasesRequest`, `TransactionOpts`, `MimeParserNode`, `SiteData`, `DocumentUnderstandingServiceClient`, `Filename`, `AbiRange`, `ReduxActionWithPayload`, `MDCListAdapter`, `IWizard`, `FormHook`, `Event_PropertiesEntry`, `DiagramState`, `SegmentedBarItem`, `DashboardSavedObject`, `GLclampf`, `ITextProps`, `ITx`, `AuthenticationVirtualMachine`, `LogicalWhereExpr`, `guildDoc`, `StyleSetEvaluator`, `NSError`, `SpritesStateRecord`, `DefaultGuiState`, `ChangedElementsDb`, `YamlNode`, `MetamaskPolkadotSnap`, `PositionConfig`, `Tester`, `FileSystemProvider`, `UnitsProvider`, `IContextualMenuItemStyles`, `AVRPortConfig`, `DescribeDomainCommandInput`, `ResultList`, `OpenAPI.PathItem`, `AutomationHelper`, `TypeChecker`, `AuthFacade`, `IPageChangeEvent`, `ReadableStreamReader`, `InputSearchExpressionGroup`, `HandlerInfo`, `CeloTxObject`, `RelationshipService`, `IAutocompleteSelectCellEditorParameters`, `ActionQueue`, `MigrateDeploy`, `cc.Event.EventTouch`, `ConstructorParameters`, `NodeContentTree`, `HelpObj`, `ElementWrapper`, `TxOut`, `DubboTcpTransport`, `SpriteComponent`, `TestExecutionContext`, `MutationContext`, `DeployArgs`, `TabularData`, `CspDirectives`, `ExpressionValueError`, `LabelProps`, `UserMetadatumModel`, `MeasureUnitType`, `SegNode`, `EntityStateRecord`, `DatamodelEnum`, `ContentWidget`, `ResourceData`, `WhereClauseContext`, `Optimizer2`, `BatchCertificateTransfer`, `NgPackagrBuilderOptions`, `EncodedManagedModel`, `SQLStatement`, `LiveAtlasPlayer`, `BasicAction`, `XmlSchema`, `Spec`, `Translation`, `GPUProgram`, `SafeBlock`, `InternalStore`, `SimulationInfo`, `PrefixUnaryOperator`, `BrushScope`, `JavaRecord`, `SelectedCriteriaType`, `ResolveSubscriptionFn`, `SpawnFlags`, `ClassWeight`, `AudioVideoFacade`, `PSTDescriptorItem`, `RelayerRequest`, `TileLoader`, `ListClustersRequest`, `InitialStylingValues`, `ResourceLoader`, `CmsStorageEntry`, `GleeMessage`, `Slate`, `sdk.PushAudioInputStream`, `ProcessingContext`, `ESLintProgram`, `SlideLayout`, `GridView`, `LogRecord`, `RTCDataChannelParameters`, `ICSSInJSStyle`, `d.FsWriteResults`, `Ports`, `RemoteConsole`, `LineAndCharacter`, `DiagnosticRelatedInformation`, `StepperContext`, `AlertDescriptionProps`, `OrganizationalUnitPath`, `ThemeResolver`, `CSSBlocksJSXAnalyzer`, `IOperator`, `ElementEntity`, `Refresher`, `CurrentState`, `DependOnFileCondition`, `GetSuccess`, `CopyAuthOptions`, `IMessageListenerWrapper`, `WebdriverIOConfig`, `ExpressionAstNode`, `GeneralOptions`, `RectGraphicsOptions`, `UpdateProfileParams`, `TimeValues`, `HeaderData`, `IPackageInfo`, `PathAndExtension`, `UseMediaState`, `FieldDescription`, `MalNode`, `BroadcastChannel`, `Word`, `DeploymentHandler`, `TreemapSeriesType`, `IUtilityStoreState`, `TsInputComponent`, `PortModel`, `AnnotationVisitor`, `IFilterListRow`, `github.GitHub`, `SimpleTextSymbol`, `android.os.Parcelable`, `Attribute`, `FileShare`, `WalletLinkRelayAbstract`, `IHillResult`, `SchedulerLike`, `OnPreRoutingToolkit`, `DocumentationContext`, `TransferMode`, `MonitoringData`, `AuthClientRepository`, `RequestOpts`, `RemoveTagsFromResourceCommand`, `SetCombinationType`, `PaletteOutput`, `TypePoint`, `FormSubmissionErrors`, `ICellx`, `Events.activate`, `CSharpClass`, `UpdateBillingParams`, `freedom.RTCPeerConnection.RTCConfiguration`, `AuthCode`, `IConvertContext`, `SVGTextElement`, `PipeConnection`, `PaymentParams`, `AstNodeContent`, `PhotoSize`, `BindingFilter`, `L.LatLng`, `EnhancedReducerResult`, `FeederDetails`, `ISourceFileReference`, `ExportSpecifier`, `BlogPost`, 
`CountStatisticSummary`, `SeasonRequest`, `IBinding`, `TagTree`, `ParameterValueList`, `LanguageState`, `ActionReturn`, `ExtendedPostFrontMatter`, `ResetPasswordInput`, `CmsModelPlugin`, `Publication`, `ApolloLink`, `MockAttr`, `GPUBufferUsageFlags`, `NotificationIOS`, `files.Location`, `RequestUploadService`, `RoleMapping`, `AcceptableType`, `ResolutionKindSpecificLoader`, `DokiThemeConfig`, `PerModuleNameCache`, `SnapshotListParams`, `IListInfo`, `BackblazeB2Bucket`, `PackageManifest`, `types.Output`, `ThreadConnection`, `WindowsManager`, `theia.WorkspaceFolder`, `InlinableCode`, `IMOSStoryAction`, `Framework`, `ThyOverlayTrigger`, `PlanItem` + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_ner_polygot_MT4TS_en_5.5.0_3.0_1725915633586.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_ner_polygot_MT4TS_en_5.5.0_3.0_1725915633586.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDetector = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")\
+    .setInputCols(["document"])\
+    .setOutputCol("sentence")
+
+tokenizer = Tokenizer() \
+    .setInputCols("sentence") \
+    .setOutputCol("token")
+
+tokenClassifier = RoBertaForTokenClassification.pretrained("roberta_ner_polygot_MT4TS","en") \
+    .setInputCols(["sentence", "token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline(stages=[documentAssembler, sentenceDetector, tokenizer, tokenClassifier])
+
+data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")
+
+result = pipeline.fit(data).transform(data)
+```
+```scala
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDetector = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("sentence"))
+    .setOutputCol("token")
+
+val tokenClassifier = RoBertaForTokenClassification.pretrained("roberta_ner_polygot_MT4TS","en")
+    .setInputCols(Array("sentence", "token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler,sentenceDetector, tokenizer, tokenClassifier))
+
+val data = Seq("PUT YOUR STRING HERE").toDF("text")
+
+val result = pipeline.fit(data).transform(data)
+```
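+
+The pipeline above leaves token-level IOB tags in the `ner` column. As an optional continuation (a minimal sketch, not part of the original card), a `NerConverter` stage can group those tags into entity chunks before reading the results; it reuses the `documentAssembler`, `sentenceDetector`, `tokenizer`, `tokenClassifier` and `data` objects defined above.
+
+```python
+# Optional sketch: group token-level NER tags into chunks and list the predicted types.
+from sparknlp.annotator import NerConverter
+
+nerConverter = NerConverter() \
+    .setInputCols(["sentence", "token", "ner"]) \
+    .setOutputCol("ner_chunk")
+
+chunkPipeline = Pipeline(stages=[documentAssembler, sentenceDetector, tokenizer, tokenClassifier, nerConverter])
+chunkResult = chunkPipeline.fit(data).transform(data)
+chunkResult.selectExpr("explode(ner_chunk.result) as predicted_type").show(truncate=False)
+```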
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_ner_polygot_MT4TS| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|607.3 MB| + +## References + +References + +- https://huggingface.co/kevinjesse/polygot-MT4TS \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_ner_polygot_MT4TS_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_ner_polygot_MT4TS_pipeline_en.md new file mode 100644 index 00000000000000..ed1ecbd6fe4965 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_ner_polygot_MT4TS_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_ner_polygot_MT4TS_pipeline pipeline RoBertaForTokenClassification from kevinjesse +author: John Snow Labs +name: roberta_ner_polygot_MT4TS_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_ner_polygot_MT4TS_pipeline` is a English model originally trained by kevinjesse. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_ner_polygot_MT4TS_pipeline_en_5.5.0_3.0_1725915663907.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_ner_polygot_MT4TS_pipeline_en_5.5.0_3.0_1725915663907.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_ner_polygot_MT4TS_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_ner_polygot_MT4TS_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
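+
+The snippet above assumes a DataFrame `df` with a `text` column already exists. A minimal end-to-end sketch (not part of the original card; the example string and the `ner` output column name are assumptions) could look like this:
+
+```python
+# Optional sketch: load the pretrained pipeline and run it end to end.
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("roberta_ner_polygot_MT4TS_pipeline", lang = "en")
+
+df = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")
+annotations = pipeline.transform(df)
+annotations.select("ner.result").show(truncate=False)
+
+# For a single string, annotate() returns a plain Python dict.
+print(pipeline.annotate("PUT YOUR STRING HERE"))
+```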
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_ner_polygot_MT4TS_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|607.3 MB| + +## References + +https://huggingface.co/kevinjesse/polygot-MT4TS + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous_en.md new file mode 100644 index 00000000000000..a00b1247a13eb0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous_en.md @@ -0,0 +1,112 @@ +--- +layout: model +title: English RobertaForTokenClassification Large Cased model (from tner) +author: John Snow Labs +name: roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous +date: 2024-09-09 +tags: [bert, ner, open_source, en, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RobertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `roberta-large-tweetner-2020-selflabel2021-continuous` is a English model originally trained by `tner`. + +## Predicted Entities + +`group`, `creative_work`, `person`, `event`, `corporation`, `location`, `product` + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous_en_5.5.0_3.0_1725916025192.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous_en_5.5.0_3.0_1725916025192.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDetector = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")\
+    .setInputCols(["document"])\
+    .setOutputCol("sentence")
+
+tokenizer = Tokenizer() \
+    .setInputCols("sentence") \
+    .setOutputCol("token")
+
+tokenClassifier = RoBertaForTokenClassification.pretrained("roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous","en") \
+    .setInputCols(["sentence", "token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline(stages=[documentAssembler, sentenceDetector, tokenizer, tokenClassifier])
+
+data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")
+
+result = pipeline.fit(data).transform(data)
+```
+```scala
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDetector = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("sentence"))
+    .setOutputCol("token")
+
+val tokenClassifier = RoBertaForTokenClassification.pretrained("roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous","en")
+    .setInputCols(Array("sentence", "token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler,sentenceDetector, tokenizer, tokenClassifier))
+
+val data = Seq("PUT YOUR STRING HERE").toDF("text")
+
+val result = pipeline.fit(data).transform(data)
+```
+
+{:.nlu-block}
+```python
+import nlu
+nlu.load("en.ner.roberta.tweet.tweetner_2020_selflabel2021_continuous.large.by_tner").predict("""PUT YOUR STRING HERE""")
+```
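+
+For quick, single-text experiments, the fitted pipeline above can also be wrapped in a `LightPipeline`, which skips the DataFrame round-trip. This is an optional sketch, not part of the original card; it assumes the `pipeline` and `data` objects defined above.
+
+```python
+# Optional sketch: fast in-memory inference with LightPipeline.
+from sparknlp.base import LightPipeline
+
+light = LightPipeline(pipeline.fit(data))
+print(light.annotate("PUT YOUR STRING HERE")["ner"])
+```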
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|1.3 GB| + +## References + +References + +- https://huggingface.co/tner/roberta-large-tweetner-2020-selflabel2021-continuous \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous_pipeline_en.md new file mode 100644 index 00000000000000..40d208b63ec21f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous_pipeline pipeline RoBertaForTokenClassification from tner +author: John Snow Labs +name: roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous_pipeline` is a English model originally trained by tner. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous_pipeline_en_5.5.0_3.0_1725916088437.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous_pipeline_en_5.5.0_3.0_1725916088437.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_ner_roberta_large_tweetner_2020_selflabel2021_continuous_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/tner/roberta-large-tweetner-2020-selflabel2021-continuous + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_qa_fpdm_soup_model_squad2.0_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_qa_fpdm_soup_model_squad2.0_pipeline_en.md new file mode 100644 index 00000000000000..cf4ac2aaa3374a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_qa_fpdm_soup_model_squad2.0_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English roberta_qa_fpdm_soup_model_squad2.0_pipeline pipeline RoBertaForQuestionAnswering from AnonymousSub +author: John Snow Labs +name: roberta_qa_fpdm_soup_model_squad2.0_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_qa_fpdm_soup_model_squad2.0_pipeline` is a English model originally trained by AnonymousSub. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_qa_fpdm_soup_model_squad2.0_pipeline_en_5.5.0_3.0_1725866904694.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_qa_fpdm_soup_model_squad2.0_pipeline_en_5.5.0_3.0_1725866904694.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_qa_fpdm_soup_model_squad2.0_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_qa_fpdm_soup_model_squad2.0_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_qa_fpdm_soup_model_squad2.0_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|459.7 MB| + +## References + +https://huggingface.co/AnonymousSub/fpdm_roberta_soup_model_squad2.0 + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-roberta_squad_finetuned_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-roberta_squad_finetuned_pipeline_en.md new file mode 100644 index 00000000000000..764ef9715ede5b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-roberta_squad_finetuned_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English roberta_squad_finetuned_pipeline pipeline RoBertaForQuestionAnswering from mylas02 +author: John Snow Labs +name: roberta_squad_finetuned_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_squad_finetuned_pipeline` is a English model originally trained by mylas02. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_squad_finetuned_pipeline_en_5.5.0_3.0_1725876388311.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_squad_finetuned_pipeline_en_5.5.0_3.0_1725876388311.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_squad_finetuned_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_squad_finetuned_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
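+
+Because this pipeline starts with a MultiDocumentAssembler, it expects two input columns rather than a single `text` column. The sketch below is an assumption-based illustration (the `question`/`context` input names and the `answer` output name are not stated in the original card):
+
+```python
+# Optional sketch: run the QA pipeline on a question/context pair.
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("roberta_squad_finetuned_pipeline", lang = "en")
+df = spark.createDataFrame(
+    [["What framework do I use?", "I use spark-nlp."]]
+).toDF("question", "context")
+pipeline.transform(df).select("answer.result").show(truncate=False)
+```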
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_squad_finetuned_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|461.9 MB| + +## References + +https://huggingface.co/mylas02/Roberta_SQuaD_FineTuned + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-robertamodel_en.md b/docs/_posts/ahmedlone127/2024-09-09-robertamodel_en.md new file mode 100644 index 00000000000000..039b5c9c0b95c1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-robertamodel_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English robertamodel RoBertaForSequenceClassification from Yunij +author: John Snow Labs +name: robertamodel +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`robertamodel` is a English model originally trained by Yunij. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/robertamodel_en_5.5.0_3.0_1725920368580.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/robertamodel_en_5.5.0_3.0_1725920368580.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("robertamodel","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("robertamodel", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
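+
+As an optional continuation (not part of the original card), the predicted label for each input row can be read from the `class` output column of the `pipelineDF` defined above:
+
+```python
+# Optional sketch: show the input text next to the predicted label.
+pipelineDF.select("text", "class.result").show(truncate=False)
+```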
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|robertamodel| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|464.9 MB| + +## References + +https://huggingface.co/Yunij/RobertaModel \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-routing_module_action_question_conversation_move_hack_debertav3_cls_en.md b/docs/_posts/ahmedlone127/2024-09-09-routing_module_action_question_conversation_move_hack_debertav3_cls_en.md new file mode 100644 index 00000000000000..1fdf903fd957a4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-routing_module_action_question_conversation_move_hack_debertav3_cls_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English routing_module_action_question_conversation_move_hack_debertav3_cls DeBertaForSequenceClassification from Raffix +author: John Snow Labs +name: routing_module_action_question_conversation_move_hack_debertav3_cls +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, deberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DeBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`routing_module_action_question_conversation_move_hack_debertav3_cls` is a English model originally trained by Raffix. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/routing_module_action_question_conversation_move_hack_debertav3_cls_en_5.5.0_3.0_1725858771463.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/routing_module_action_question_conversation_move_hack_debertav3_cls_en_5.5.0_3.0_1725858771463.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DeBertaForSequenceClassification.pretrained("routing_module_action_question_conversation_move_hack_debertav3_cls","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DeBertaForSequenceClassification.pretrained("routing_module_action_question_conversation_move_hack_debertav3_cls", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|routing_module_action_question_conversation_move_hack_debertav3_cls| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|574.3 MB| + +## References + +https://huggingface.co/Raffix/routing_module_action_question_conversation_move_hack_debertav3_cls \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-routing_module_action_question_conversation_move_hack_debertav3_cls_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-routing_module_action_question_conversation_move_hack_debertav3_cls_pipeline_en.md new file mode 100644 index 00000000000000..f280af021a89d2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-routing_module_action_question_conversation_move_hack_debertav3_cls_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English routing_module_action_question_conversation_move_hack_debertav3_cls_pipeline pipeline DeBertaForSequenceClassification from Raffix +author: John Snow Labs +name: routing_module_action_question_conversation_move_hack_debertav3_cls_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`routing_module_action_question_conversation_move_hack_debertav3_cls_pipeline` is a English model originally trained by Raffix. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/routing_module_action_question_conversation_move_hack_debertav3_cls_pipeline_en_5.5.0_3.0_1725858830207.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/routing_module_action_question_conversation_move_hack_debertav3_cls_pipeline_en_5.5.0_3.0_1725858830207.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("routing_module_action_question_conversation_move_hack_debertav3_cls_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("routing_module_action_question_conversation_move_hack_debertav3_cls_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|routing_module_action_question_conversation_move_hack_debertav3_cls_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|574.3 MB| + +## References + +https://huggingface.co/Raffix/routing_module_action_question_conversation_move_hack_debertav3_cls + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-rubert_large_squad_en.md b/docs/_posts/ahmedlone127/2024-09-09-rubert_large_squad_en.md new file mode 100644 index 00000000000000..0de54a8d67dc26 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-rubert_large_squad_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English rubert_large_squad BertForQuestionAnswering from Den4ikAI +author: John Snow Labs +name: rubert_large_squad +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, bert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`rubert_large_squad` is a English model originally trained by Den4ikAI. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/rubert_large_squad_en_5.5.0_3.0_1725858303613.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/rubert_large_squad_en_5.5.0_3.0_1725858303613.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = BertForQuestionAnswering.pretrained("rubert_large_squad","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = BertForQuestionAnswering.pretrained("rubert_large_squad", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
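+
+As an optional continuation (not part of the original card), the extracted answer span can be read from the `answer` output column of the `pipelineDF` defined above:
+
+```python
+# Optional sketch: show the answer extracted for the question/context pair.
+pipelineDF.select("answer.result").show(truncate=False)
+```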
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|rubert_large_squad| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|667.1 MB| + +## References + +https://huggingface.co/Den4ikAI/rubert-large-squad \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-rulebert_v0_4_k2_it.md b/docs/_posts/ahmedlone127/2024-09-09-rulebert_v0_4_k2_it.md new file mode 100644 index 00000000000000..5dfa435373342f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-rulebert_v0_4_k2_it.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Italian rulebert_v0_4_k2 XlmRoBertaForSequenceClassification from ribesstefano +author: John Snow Labs +name: rulebert_v0_4_k2 +date: 2024-09-09 +tags: [it, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: it +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`rulebert_v0_4_k2` is a Italian model originally trained by ribesstefano. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/rulebert_v0_4_k2_it_5.5.0_3.0_1725906573514.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/rulebert_v0_4_k2_it_5.5.0_3.0_1725906573514.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("rulebert_v0_4_k2","it") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("rulebert_v0_4_k2", "it")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|rulebert_v0_4_k2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|it| +|Size:|870.4 MB| + +## References + +https://huggingface.co/ribesstefano/RuleBert-v0.4-k2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-ruroberta_distilled_pipeline_ru.md b/docs/_posts/ahmedlone127/2024-09-09-ruroberta_distilled_pipeline_ru.md new file mode 100644 index 00000000000000..bc7ab0fe08fd49 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-ruroberta_distilled_pipeline_ru.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Russian ruroberta_distilled_pipeline pipeline RoBertaEmbeddings from d0rj +author: John Snow Labs +name: ruroberta_distilled_pipeline +date: 2024-09-09 +tags: [ru, open_source, pipeline, onnx] +task: Embeddings +language: ru +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ruroberta_distilled_pipeline` is a Russian model originally trained by d0rj. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ruroberta_distilled_pipeline_ru_5.5.0_3.0_1725910188761.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ruroberta_distilled_pipeline_ru_5.5.0_3.0_1725910188761.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("ruroberta_distilled_pipeline", lang = "ru") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("ruroberta_distilled_pipeline", lang = "ru") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ruroberta_distilled_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|ru| +|Size:|432.0 MB| + +## References + +https://huggingface.co/d0rj/ruRoberta-distilled + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-ruroberta_distilled_ru.md b/docs/_posts/ahmedlone127/2024-09-09-ruroberta_distilled_ru.md new file mode 100644 index 00000000000000..b6b4d7ebcd36b9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-ruroberta_distilled_ru.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Russian ruroberta_distilled RoBertaEmbeddings from d0rj +author: John Snow Labs +name: ruroberta_distilled +date: 2024-09-09 +tags: [ru, open_source, onnx, embeddings, roberta] +task: Embeddings +language: ru +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ruroberta_distilled` is a Russian model originally trained by d0rj. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ruroberta_distilled_ru_5.5.0_3.0_1725910167592.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ruroberta_distilled_ru_5.5.0_3.0_1725910167592.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("ruroberta_distilled","ru") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("ruroberta_distilled","ru") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
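+
+As an optional continuation (not part of the original card), each token annotation in the `embeddings` output column carries its vector in the `embeddings` field, so the tokens and vector dimensionality can be inspected from the `pipelineDF` defined above:
+
+```python
+# Optional sketch: list tokens with the size of their embedding vectors.
+from pyspark.sql.functions import explode
+
+pipelineDF.select(explode("embeddings").alias("emb")) \
+    .selectExpr("emb.result as token", "size(emb.embeddings) as dim") \
+    .show(truncate=False)
+```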
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ruroberta_distilled| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|ru| +|Size:|432.0 MB| + +## References + +https://huggingface.co/d0rj/ruRoberta-distilled \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-sanskrit_saskta_roberta_e12_w1_1_5_b16_m4_en.md b/docs/_posts/ahmedlone127/2024-09-09-sanskrit_saskta_roberta_e12_w1_1_5_b16_m4_en.md new file mode 100644 index 00000000000000..20a1c3dd3c6260 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-sanskrit_saskta_roberta_e12_w1_1_5_b16_m4_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English sanskrit_saskta_roberta_e12_w1_1_5_b16_m4 RoBertaForSequenceClassification from JerryYanJiang +author: John Snow Labs +name: sanskrit_saskta_roberta_e12_w1_1_5_b16_m4 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sanskrit_saskta_roberta_e12_w1_1_5_b16_m4` is a English model originally trained by JerryYanJiang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sanskrit_saskta_roberta_e12_w1_1_5_b16_m4_en_5.5.0_3.0_1725912619570.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sanskrit_saskta_roberta_e12_w1_1_5_b16_m4_en_5.5.0_3.0_1725912619570.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("sanskrit_saskta_roberta_e12_w1_1_5_b16_m4","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("sanskrit_saskta_roberta_e12_w1_1_5_b16_m4", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sanskrit_saskta_roberta_e12_w1_1_5_b16_m4| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/JerryYanJiang/SA-roberta-e12-w1-1.5-b16-m4 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-sanskrit_saskta_roberta_e12_w1_1_5_b16_m4_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-sanskrit_saskta_roberta_e12_w1_1_5_b16_m4_pipeline_en.md new file mode 100644 index 00000000000000..fa99f316ee59e2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-sanskrit_saskta_roberta_e12_w1_1_5_b16_m4_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English sanskrit_saskta_roberta_e12_w1_1_5_b16_m4_pipeline pipeline RoBertaForSequenceClassification from JerryYanJiang +author: John Snow Labs +name: sanskrit_saskta_roberta_e12_w1_1_5_b16_m4_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sanskrit_saskta_roberta_e12_w1_1_5_b16_m4_pipeline` is a English model originally trained by JerryYanJiang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sanskrit_saskta_roberta_e12_w1_1_5_b16_m4_pipeline_en_5.5.0_3.0_1725912686497.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sanskrit_saskta_roberta_e12_w1_1_5_b16_m4_pipeline_en_5.5.0_3.0_1725912686497.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("sanskrit_saskta_roberta_e12_w1_1_5_b16_m4_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("sanskrit_saskta_roberta_e12_w1_1_5_b16_m4_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sanskrit_saskta_roberta_e12_w1_1_5_b16_m4_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/JerryYanJiang/SA-roberta-e12-w1-1.5-b16-m4 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-sarcasm_detection_roberta_base_newdata_en.md b/docs/_posts/ahmedlone127/2024-09-09-sarcasm_detection_roberta_base_newdata_en.md new file mode 100644 index 00000000000000..23100f4debb3db --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-sarcasm_detection_roberta_base_newdata_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English sarcasm_detection_roberta_base_newdata RoBertaForSequenceClassification from jkhan447 +author: John Snow Labs +name: sarcasm_detection_roberta_base_newdata +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sarcasm_detection_roberta_base_newdata` is a English model originally trained by jkhan447. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sarcasm_detection_roberta_base_newdata_en_5.5.0_3.0_1725903097217.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sarcasm_detection_roberta_base_newdata_en_5.5.0_3.0_1725903097217.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("sarcasm_detection_roberta_base_newdata","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("sarcasm_detection_roberta_base_newdata", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sarcasm_detection_roberta_base_newdata| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|467.6 MB| + +## References + +https://huggingface.co/jkhan447/sarcasm-detection-RoBerta-base-newdata \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-sarcasm_detection_roberta_base_newdata_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-sarcasm_detection_roberta_base_newdata_pipeline_en.md new file mode 100644 index 00000000000000..801e09ba0e4834 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-sarcasm_detection_roberta_base_newdata_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English sarcasm_detection_roberta_base_newdata_pipeline pipeline RoBertaForSequenceClassification from jkhan447 +author: John Snow Labs +name: sarcasm_detection_roberta_base_newdata_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sarcasm_detection_roberta_base_newdata_pipeline` is a English model originally trained by jkhan447. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sarcasm_detection_roberta_base_newdata_pipeline_en_5.5.0_3.0_1725903119127.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sarcasm_detection_roberta_base_newdata_pipeline_en_5.5.0_3.0_1725903119127.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("sarcasm_detection_roberta_base_newdata_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("sarcasm_detection_roberta_base_newdata_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sarcasm_detection_roberta_base_newdata_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|467.6 MB| + +## References + +https://huggingface.co/jkhan447/sarcasm-detection-RoBerta-base-newdata + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-scenario_tcr_4_data_english_cardiff_eng_only_en.md b/docs/_posts/ahmedlone127/2024-09-09-scenario_tcr_4_data_english_cardiff_eng_only_en.md new file mode 100644 index 00000000000000..a30ff3181f2f99 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-scenario_tcr_4_data_english_cardiff_eng_only_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English scenario_tcr_4_data_english_cardiff_eng_only XlmRoBertaForSequenceClassification from haryoaw +author: John Snow Labs +name: scenario_tcr_4_data_english_cardiff_eng_only +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`scenario_tcr_4_data_english_cardiff_eng_only` is a English model originally trained by haryoaw. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/scenario_tcr_4_data_english_cardiff_eng_only_en_5.5.0_3.0_1725907683962.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/scenario_tcr_4_data_english_cardiff_eng_only_en_5.5.0_3.0_1725907683962.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("scenario_tcr_4_data_english_cardiff_eng_only","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("scenario_tcr_4_data_english_cardiff_eng_only", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|scenario_tcr_4_data_english_cardiff_eng_only| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|829.9 MB| + +## References + +https://huggingface.co/haryoaw/scenario-TCR-4_data-en-cardiff_eng_only \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-sentence_acceptability_en.md b/docs/_posts/ahmedlone127/2024-09-09-sentence_acceptability_en.md new file mode 100644 index 00000000000000..ecce8a81882448 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-sentence_acceptability_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English sentence_acceptability BertForSequenceClassification from EstherT +author: John Snow Labs +name: sentence_acceptability +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sentence_acceptability` is a English model originally trained by EstherT. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sentence_acceptability_en_5.5.0_3.0_1725852912514.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sentence_acceptability_en_5.5.0_3.0_1725852912514.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = BertForSequenceClassification.pretrained("sentence_acceptability","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = BertForSequenceClassification.pretrained("sentence_acceptability", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sentence_acceptability| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.4 MB| + +## References + +https://huggingface.co/EstherT/sentence-acceptability \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-sentiment_analysis_model_quophydzifa_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-sentiment_analysis_model_quophydzifa_pipeline_en.md new file mode 100644 index 00000000000000..ce8bdf2032c9bb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-sentiment_analysis_model_quophydzifa_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English sentiment_analysis_model_quophydzifa_pipeline pipeline DistilBertForSequenceClassification from QuophyDzifa +author: John Snow Labs +name: sentiment_analysis_model_quophydzifa_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sentiment_analysis_model_quophydzifa_pipeline` is a English model originally trained by QuophyDzifa. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sentiment_analysis_model_quophydzifa_pipeline_en_5.5.0_3.0_1725872892489.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sentiment_analysis_model_quophydzifa_pipeline_en_5.5.0_3.0_1725872892489.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("sentiment_analysis_model_quophydzifa_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("sentiment_analysis_model_quophydzifa_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
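+
+The snippet above assumes an existing DataFrame `df`. A minimal sketch of preparing that input and running the pipeline, assuming a Spark NLP session started with `sparknlp.start()` and that the pipeline reads its raw input from a `text` column (the usual convention for these pretrained pipelines):
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()
+
+# One row per document; the pretrained pipeline expects the text in a column named "text".
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+
+pipeline = PretrainedPipeline("sentiment_analysis_model_quophydzifa_pipeline", lang="en")
+annotations = pipeline.transform(df)
+annotations.show(truncate=False)
+```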
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sentiment_analysis_model_quophydzifa_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/QuophyDzifa/Sentiment-Analysis-Model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-sentiment_analysis_twitter_roberta_fine_tune_hashtag_removed_en.md b/docs/_posts/ahmedlone127/2024-09-09-sentiment_analysis_twitter_roberta_fine_tune_hashtag_removed_en.md new file mode 100644 index 00000000000000..44e0e86a503f1a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-sentiment_analysis_twitter_roberta_fine_tune_hashtag_removed_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English sentiment_analysis_twitter_roberta_fine_tune_hashtag_removed RoBertaForSequenceClassification from technocrat3128 +author: John Snow Labs +name: sentiment_analysis_twitter_roberta_fine_tune_hashtag_removed +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sentiment_analysis_twitter_roberta_fine_tune_hashtag_removed` is a English model originally trained by technocrat3128. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sentiment_analysis_twitter_roberta_fine_tune_hashtag_removed_en_5.5.0_3.0_1725903839066.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sentiment_analysis_twitter_roberta_fine_tune_hashtag_removed_en_5.5.0_3.0_1725903839066.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("sentiment_analysis_twitter_roberta_fine_tune_hashtag_removed","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("sentiment_analysis_twitter_roberta_fine_tune_hashtag_removed", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sentiment_analysis_twitter_roberta_fine_tune_hashtag_removed| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.3 MB| + +## References + +https://huggingface.co/technocrat3128/sentiment_analysis_Twitter_roberta_fine_tune_hashtag_removed \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-sentiment_sentiment_small_random0_seed2_twitter_roberta_base_2022_154m_en.md b/docs/_posts/ahmedlone127/2024-09-09-sentiment_sentiment_small_random0_seed2_twitter_roberta_base_2022_154m_en.md new file mode 100644 index 00000000000000..6d0d9cedf5d2fa --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-sentiment_sentiment_small_random0_seed2_twitter_roberta_base_2022_154m_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English sentiment_sentiment_small_random0_seed2_twitter_roberta_base_2022_154m RoBertaForSequenceClassification from tweettemposhift +author: John Snow Labs +name: sentiment_sentiment_small_random0_seed2_twitter_roberta_base_2022_154m +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sentiment_sentiment_small_random0_seed2_twitter_roberta_base_2022_154m` is a English model originally trained by tweettemposhift. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sentiment_sentiment_small_random0_seed2_twitter_roberta_base_2022_154m_en_5.5.0_3.0_1725912143899.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sentiment_sentiment_small_random0_seed2_twitter_roberta_base_2022_154m_en_5.5.0_3.0_1725912143899.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("sentiment_sentiment_small_random0_seed2_twitter_roberta_base_2022_154m","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("sentiment_sentiment_small_random0_seed2_twitter_roberta_base_2022_154m", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sentiment_sentiment_small_random0_seed2_twitter_roberta_base_2022_154m| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.1 MB| + +## References + +https://huggingface.co/tweettemposhift/sentiment-sentiment_small_random0_seed2-twitter-roberta-base-2022-154m \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-setfit_model_bhathiya_en.md b/docs/_posts/ahmedlone127/2024-09-09-setfit_model_bhathiya_en.md new file mode 100644 index 00000000000000..a190f0d5a09c31 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-setfit_model_bhathiya_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English setfit_model_bhathiya MPNetEmbeddings from Bhathiya +author: John Snow Labs +name: setfit_model_bhathiya +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`setfit_model_bhathiya` is a English model originally trained by Bhathiya. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/setfit_model_bhathiya_en_5.5.0_3.0_1725874793712.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/setfit_model_bhathiya_en_5.5.0_3.0_1725874793712.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("setfit_model_bhathiya","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("setfit_model_bhathiya","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
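+
+Each row of the `embeddings` column produced above is an array of annotations whose `embeddings` field carries the sentence vector. A minimal sketch of pulling the vectors out of `pipelineDF` from the snippet above:
+
+```python
+# Explode the annotation array and keep only the numeric vector per document.
+pipelineDF.selectExpr("explode(embeddings) as emb") \
+    .selectExpr("emb.embeddings as vector") \
+    .show(truncate=80)
+```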
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|setfit_model_bhathiya| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/Bhathiya/setfit-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-setfit_model_independence_labelfaithful_epochs2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-setfit_model_independence_labelfaithful_epochs2_pipeline_en.md new file mode 100644 index 00000000000000..873ec61dcb37d1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-setfit_model_independence_labelfaithful_epochs2_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English setfit_model_independence_labelfaithful_epochs2_pipeline pipeline MPNetEmbeddings from mitra-mir +author: John Snow Labs +name: setfit_model_independence_labelfaithful_epochs2_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`setfit_model_independence_labelfaithful_epochs2_pipeline` is a English model originally trained by mitra-mir. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/setfit_model_independence_labelfaithful_epochs2_pipeline_en_5.5.0_3.0_1725874538149.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/setfit_model_independence_labelfaithful_epochs2_pipeline_en_5.5.0_3.0_1725874538149.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("setfit_model_independence_labelfaithful_epochs2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("setfit_model_independence_labelfaithful_epochs2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|setfit_model_independence_labelfaithful_epochs2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/mitra-mir/setfit_model_Independence_labelfaithful_epochs2 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-setfit_model_ireland_4labels_unbalanced_data_2epochs_en.md b/docs/_posts/ahmedlone127/2024-09-09-setfit_model_ireland_4labels_unbalanced_data_2epochs_en.md new file mode 100644 index 00000000000000..bb9a2936e49c26 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-setfit_model_ireland_4labels_unbalanced_data_2epochs_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English setfit_model_ireland_4labels_unbalanced_data_2epochs MPNetEmbeddings from mitra-mir +author: John Snow Labs +name: setfit_model_ireland_4labels_unbalanced_data_2epochs +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`setfit_model_ireland_4labels_unbalanced_data_2epochs` is a English model originally trained by mitra-mir. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/setfit_model_ireland_4labels_unbalanced_data_2epochs_en_5.5.0_3.0_1725897238532.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/setfit_model_ireland_4labels_unbalanced_data_2epochs_en_5.5.0_3.0_1725897238532.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("setfit_model_ireland_4labels_unbalanced_data_2epochs","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("setfit_model_ireland_4labels_unbalanced_data_2epochs","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|setfit_model_ireland_4labels_unbalanced_data_2epochs| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/mitra-mir/setfit-model-Ireland_4labels_unbalanced_data_2epochs \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-setfit_model_ireland_binary_label0_epochs2_feb_28_2023_en.md b/docs/_posts/ahmedlone127/2024-09-09-setfit_model_ireland_binary_label0_epochs2_feb_28_2023_en.md new file mode 100644 index 00000000000000..28b9a75daf0506 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-setfit_model_ireland_binary_label0_epochs2_feb_28_2023_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English setfit_model_ireland_binary_label0_epochs2_feb_28_2023 MPNetEmbeddings from mitra-mir +author: John Snow Labs +name: setfit_model_ireland_binary_label0_epochs2_feb_28_2023 +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`setfit_model_ireland_binary_label0_epochs2_feb_28_2023` is a English model originally trained by mitra-mir. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/setfit_model_ireland_binary_label0_epochs2_feb_28_2023_en_5.5.0_3.0_1725896546945.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/setfit_model_ireland_binary_label0_epochs2_feb_28_2023_en_5.5.0_3.0_1725896546945.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("setfit_model_ireland_binary_label0_epochs2_feb_28_2023","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("setfit_model_ireland_binary_label0_epochs2_feb_28_2023","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|setfit_model_ireland_binary_label0_epochs2_feb_28_2023| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/mitra-mir/setfit_model_Ireland_binary_label0_epochs2_Feb_28_2023 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-setfit_model_misinformation_on_media_traditional_social_en.md b/docs/_posts/ahmedlone127/2024-09-09-setfit_model_misinformation_on_media_traditional_social_en.md new file mode 100644 index 00000000000000..a1247d4296c734 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-setfit_model_misinformation_on_media_traditional_social_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English setfit_model_misinformation_on_media_traditional_social MPNetEmbeddings from mitra-mir +author: John Snow Labs +name: setfit_model_misinformation_on_media_traditional_social +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`setfit_model_misinformation_on_media_traditional_social` is a English model originally trained by mitra-mir. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/setfit_model_misinformation_on_media_traditional_social_en_5.5.0_3.0_1725874780927.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/setfit_model_misinformation_on_media_traditional_social_en_5.5.0_3.0_1725874780927.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("setfit_model_misinformation_on_media_traditional_social","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("setfit_model_misinformation_on_media_traditional_social","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|setfit_model_misinformation_on_media_traditional_social| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/mitra-mir/setfit-model-Misinformation-on-Media-Traditional-Social \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-setfit_product_review_regression_en.md b/docs/_posts/ahmedlone127/2024-09-09-setfit_product_review_regression_en.md new file mode 100644 index 00000000000000..ca62a30d6fc40d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-setfit_product_review_regression_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English setfit_product_review_regression MPNetEmbeddings from ivanzidov +author: John Snow Labs +name: setfit_product_review_regression +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`setfit_product_review_regression` is a English model originally trained by ivanzidov. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/setfit_product_review_regression_en_5.5.0_3.0_1725896405090.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/setfit_product_review_regression_en_5.5.0_3.0_1725896405090.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("setfit_product_review_regression","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("setfit_product_review_regression","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|setfit_product_review_regression| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.0 MB| + +## References + +https://huggingface.co/ivanzidov/setfit-product-review-regression \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-sinberto_si.md b/docs/_posts/ahmedlone127/2024-09-09-sinberto_si.md new file mode 100644 index 00000000000000..a1397e5e6cbc62 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-sinberto_si.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Sinhala, Sinhalese sinberto RoBertaEmbeddings from Kalindu +author: John Snow Labs +name: sinberto +date: 2024-09-09 +tags: [si, open_source, onnx, embeddings, roberta] +task: Embeddings +language: si +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sinberto` is a Sinhala, Sinhalese model originally trained by Kalindu. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sinberto_si_5.5.0_3.0_1725925344673.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sinberto_si_5.5.0_3.0_1725925344673.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("sinberto","si") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("sinberto","si") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
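+
+RoBertaEmbeddings emits one annotation per token, so the vectors produced above are token-level rather than sentence-level. A minimal sketch of listing each token next to its contextual vector from `pipelineDF` in the snippet above:
+
+```python
+# `result` is the token text, `embeddings` its contextual vector.
+pipelineDF.selectExpr("explode(embeddings) as emb") \
+    .selectExpr("emb.result as token", "emb.embeddings as vector") \
+    .show(truncate=60)
+```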
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sinberto| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|si| +|Size:|308.4 MB| + +## References + +https://huggingface.co/Kalindu/SinBerto \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-single_label_unbiased_relevant_profession_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-single_label_unbiased_relevant_profession_pipeline_en.md new file mode 100644 index 00000000000000..7fb0018a321bce --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-single_label_unbiased_relevant_profession_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English single_label_unbiased_relevant_profession_pipeline pipeline XlmRoBertaForSequenceClassification from ledigajobb +author: John Snow Labs +name: single_label_unbiased_relevant_profession_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`single_label_unbiased_relevant_profession_pipeline` is a English model originally trained by ledigajobb. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/single_label_unbiased_relevant_profession_pipeline_en_5.5.0_3.0_1725906965712.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/single_label_unbiased_relevant_profession_pipeline_en_5.5.0_3.0_1725906965712.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("single_label_unbiased_relevant_profession_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("single_label_unbiased_relevant_profession_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|single_label_unbiased_relevant_profession_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|881.8 MB| + +## References + +https://huggingface.co/ledigajobb/single_label_unbiased_relevant_profession + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-small_8_c_pipeline_mr.md b/docs/_posts/ahmedlone127/2024-09-09-small_8_c_pipeline_mr.md new file mode 100644 index 00000000000000..a8976a48d667cc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-small_8_c_pipeline_mr.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Marathi small_8_c_pipeline pipeline WhisperForCTC from simran14 +author: John Snow Labs +name: small_8_c_pipeline +date: 2024-09-09 +tags: [mr, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: mr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`small_8_c_pipeline` is a Marathi model originally trained by simran14. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/small_8_c_pipeline_mr_5.5.0_3.0_1725847951644.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/small_8_c_pipeline_mr_5.5.0_3.0_1725847951644.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("small_8_c_pipeline", lang = "mr") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("small_8_c_pipeline", lang = "mr") +val annotations = pipeline.transform(df) + +``` +
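+
+For this speech-recognition pipeline, `df` must carry raw audio rather than text. A minimal sketch, assuming the pipeline's AudioAssembler reads an `audio_content` column holding arrays of float samples (the usual convention in Spark NLP ASR examples) and that such data is already stored in a Parquet file; the path below is a placeholder:
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()
+
+# One row per utterance; "audio_content" is an array<float> of PCM samples.
+df = spark.read.parquet("/path/to/audio_samples.parquet")
+
+pipeline = PretrainedPipeline("small_8_c_pipeline", lang="mr")
+transcripts = pipeline.transform(df)
+transcripts.show(truncate=False)
+```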
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|small_8_c_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|mr| +|Size:|1.7 GB| + +## References + +https://huggingface.co/simran14/small_8_c + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-southern_sotho_all_mpnet_finetuned_arabic_2481_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-southern_sotho_all_mpnet_finetuned_arabic_2481_pipeline_en.md new file mode 100644 index 00000000000000..da61e4b816f370 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-southern_sotho_all_mpnet_finetuned_arabic_2481_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English southern_sotho_all_mpnet_finetuned_arabic_2481_pipeline pipeline MPNetEmbeddings from danfeg +author: John Snow Labs +name: southern_sotho_all_mpnet_finetuned_arabic_2481_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`southern_sotho_all_mpnet_finetuned_arabic_2481_pipeline` is a English model originally trained by danfeg. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/southern_sotho_all_mpnet_finetuned_arabic_2481_pipeline_en_5.5.0_3.0_1725875142239.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/southern_sotho_all_mpnet_finetuned_arabic_2481_pipeline_en_5.5.0_3.0_1725875142239.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("southern_sotho_all_mpnet_finetuned_arabic_2481_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("southern_sotho_all_mpnet_finetuned_arabic_2481_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|southern_sotho_all_mpnet_finetuned_arabic_2481_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/danfeg/ST-ALL-MPNET_Finetuned-AR-2481 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-spanish_catalan_en.md b/docs/_posts/ahmedlone127/2024-09-09-spanish_catalan_en.md new file mode 100644 index 00000000000000..03d1aff31ee7bc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-spanish_catalan_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English spanish_catalan MarianTransformer from Ife +author: John Snow Labs +name: spanish_catalan +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`spanish_catalan` is a English model originally trained by Ife. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/spanish_catalan_en_5.5.0_3.0_1725863281968.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/spanish_catalan_en_5.5.0_3.0_1725863281968.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("spanish_catalan","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("spanish_catalan","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
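+
+With the column names used above, the translated text lands in the `translation` column. A minimal sketch of reading it back out of `pipelineDF` from the snippet above:
+
+```python
+# One annotation per detected sentence; its `result` field holds the translated string.
+pipelineDF.select("translation.result").show(truncate=False)
+```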
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|spanish_catalan| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|466.2 MB| + +## References + +https://huggingface.co/Ife/ES-CA \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-spanish_spanglish_en.md b/docs/_posts/ahmedlone127/2024-09-09-spanish_spanglish_en.md new file mode 100644 index 00000000000000..86797489fc0006 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-spanish_spanglish_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English spanish_spanglish MarianTransformer from drewcurran +author: John Snow Labs +name: spanish_spanglish +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`spanish_spanglish` is a English model originally trained by drewcurran. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/spanish_spanglish_en_5.5.0_3.0_1725913611896.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/spanish_spanglish_en_5.5.0_3.0_1725913611896.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("spanish_spanglish","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("spanish_spanglish","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|spanish_spanglish| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|539.9 MB| + +## References + +https://huggingface.co/drewcurran/spanish-spanglish \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-spea_4_en.md b/docs/_posts/ahmedlone127/2024-09-09-spea_4_en.md new file mode 100644 index 00000000000000..a56a89b38eed01 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-spea_4_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English spea_4 RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: spea_4 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`spea_4` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/spea_4_en_5.5.0_3.0_1725902359785.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/spea_4_en_5.5.0_3.0_1725902359785.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("spea_4","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("spea_4", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|spea_4| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Spea_4 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-squad_qa_model_horyekhunley_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-squad_qa_model_horyekhunley_pipeline_en.md new file mode 100644 index 00000000000000..5319b98cef7b9e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-squad_qa_model_horyekhunley_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English squad_qa_model_horyekhunley_pipeline pipeline DistilBertForQuestionAnswering from horyekhunley +author: John Snow Labs +name: squad_qa_model_horyekhunley_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`squad_qa_model_horyekhunley_pipeline` is a English model originally trained by horyekhunley. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/squad_qa_model_horyekhunley_pipeline_en_5.5.0_3.0_1725876979952.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/squad_qa_model_horyekhunley_pipeline_en_5.5.0_3.0_1725876979952.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("squad_qa_model_horyekhunley_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("squad_qa_model_horyekhunley_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
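+
+Question-answering pipelines take two inputs rather than one. A minimal sketch, assuming the pipeline's MultiDocumentAssembler reads `question` and `context` columns and writes its prediction to an `answer` column (these names follow the usual convention for such QA pipelines and are not stated on this card):
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()
+
+# One row per (question, context) pair.
+df = spark.createDataFrame(
+    [["What is Spark NLP?", "Spark NLP is an NLP library built on top of Apache Spark."]]
+).toDF("question", "context")
+
+qa_pipeline = PretrainedPipeline("squad_qa_model_horyekhunley_pipeline", lang="en")
+qa_pipeline.transform(df).select("answer.result").show(truncate=False)
+```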
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|squad_qa_model_horyekhunley_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/horyekhunley/squad_qa_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-task_subtle_task__model_deberta__aug_method_rsa_en.md b/docs/_posts/ahmedlone127/2024-09-09-task_subtle_task__model_deberta__aug_method_rsa_en.md new file mode 100644 index 00000000000000..42a6c573529112 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-task_subtle_task__model_deberta__aug_method_rsa_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English task_subtle_task__model_deberta__aug_method_rsa DeBertaForSequenceClassification from BenjaminOcampo +author: John Snow Labs +name: task_subtle_task__model_deberta__aug_method_rsa +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, deberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DeBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`task_subtle_task__model_deberta__aug_method_rsa` is a English model originally trained by BenjaminOcampo. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/task_subtle_task__model_deberta__aug_method_rsa_en_5.5.0_3.0_1725859901966.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/task_subtle_task__model_deberta__aug_method_rsa_en_5.5.0_3.0_1725859901966.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = DeBertaForSequenceClassification.pretrained("task_subtle_task__model_deberta__aug_method_rsa","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = DeBertaForSequenceClassification.pretrained("task_subtle_task__model_deberta__aug_method_rsa", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|task_subtle_task__model_deberta__aug_method_rsa| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|607.5 MB| + +## References + +https://huggingface.co/BenjaminOcampo/task-subtle_task__model-deberta__aug_method-rsa \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-task_subtle_task__model_deberta__aug_method_rsa_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-task_subtle_task__model_deberta__aug_method_rsa_pipeline_en.md new file mode 100644 index 00000000000000..99c2f4f6b1d44d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-task_subtle_task__model_deberta__aug_method_rsa_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English task_subtle_task__model_deberta__aug_method_rsa_pipeline pipeline DeBertaForSequenceClassification from BenjaminOcampo +author: John Snow Labs +name: task_subtle_task__model_deberta__aug_method_rsa_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DeBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`task_subtle_task__model_deberta__aug_method_rsa_pipeline` is a English model originally trained by BenjaminOcampo. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/task_subtle_task__model_deberta__aug_method_rsa_pipeline_en_5.5.0_3.0_1725859937186.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/task_subtle_task__model_deberta__aug_method_rsa_pipeline_en_5.5.0_3.0_1725859937186.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("task_subtle_task__model_deberta__aug_method_rsa_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("task_subtle_task__model_deberta__aug_method_rsa_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|task_subtle_task__model_deberta__aug_method_rsa_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|607.5 MB| + +## References + +https://huggingface.co/BenjaminOcampo/task-subtle_task__model-deberta__aug_method-rsa + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DeBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-tat_model_en.md b/docs/_posts/ahmedlone127/2024-09-09-tat_model_en.md new file mode 100644 index 00000000000000..174356db6c63a2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-tat_model_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English tat_model MPNetEmbeddings from mathislucka +author: John Snow Labs +name: tat_model +date: 2024-09-09 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`tat_model` is a English model originally trained by mathislucka. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/tat_model_en_5.5.0_3.0_1725896812362.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/tat_model_en_5.5.0_3.0_1725896812362.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("tat_model","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("tat_model","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|tat_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/mathislucka/tat-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-team4_adr_detector_en.md b/docs/_posts/ahmedlone127/2024-09-09-team4_adr_detector_en.md new file mode 100644 index 00000000000000..e24835fa7fddfb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-team4_adr_detector_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English team4_adr_detector RoBertaForSequenceClassification from MSBATeam4 +author: John Snow Labs +name: team4_adr_detector +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`team4_adr_detector` is a English model originally trained by MSBATeam4. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/team4_adr_detector_en_5.5.0_3.0_1725920549742.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/team4_adr_detector_en_5.5.0_3.0_1725920549742.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = RoBertaForSequenceClassification.pretrained("team4_adr_detector","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols(Array("document"))
  .setOutputCol("token")

val sequenceClassifier = RoBertaForSequenceClassification.pretrained("team4_adr_detector", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
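After the pipeline in this card has been fit and applied, the predicted label for each row is stored in the `class` annotation column configured above; a short, assumed inspection step:

```python
# Hypothetical follow-up to this card's example: show each input text next to
# the label predicted by the sequence classifier ("class" is its output column).
pipelineDF.select("text", "class.result").show(truncate=False)
```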
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|team4_adr_detector| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/MSBATeam4/Team4_ADR_Detector \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-temp_model_en.md b/docs/_posts/ahmedlone127/2024-09-09-temp_model_en.md new file mode 100644 index 00000000000000..06bba467e80aaf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-temp_model_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English temp_model RoBertaForSequenceClassification from gsdas +author: John Snow Labs +name: temp_model +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`temp_model` is a English model originally trained by gsdas. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/temp_model_en_5.5.0_3.0_1725912509502.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/temp_model_en_5.5.0_3.0_1725912509502.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = RoBertaForSequenceClassification.pretrained("temp_model","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols(Array("document"))
  .setOutputCol("token")

val sequenceClassifier = RoBertaForSequenceClassification.pretrained("temp_model", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|temp_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/gsdas/temp_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-temp_model_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-temp_model_pipeline_en.md new file mode 100644 index 00000000000000..58be4ebf08bd5c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-temp_model_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English temp_model_pipeline pipeline RoBertaForSequenceClassification from gsdas +author: John Snow Labs +name: temp_model_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`temp_model_pipeline` is a English model originally trained by gsdas. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/temp_model_pipeline_en_5.5.0_3.0_1725912587967.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/temp_model_pipeline_en_5.5.0_3.0_1725912587967.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("temp_model_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("temp_model_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
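The pretrained-pipeline examples reference a DataFrame `df` without constructing it. A minimal sketch of that setup, plus the lighter `annotate` path for ad-hoc strings (the sample text is a placeholder):

```python
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

spark = sparknlp.start()
pipeline = PretrainedPipeline("temp_model_pipeline", lang="en")

# DataFrame route: the pipeline expects a string column named "text".
df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
annotations = pipeline.transform(df)

# Ad-hoc route: annotate a single string without building a DataFrame.
print(pipeline.annotate("I love spark-nlp"))
```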
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|temp_model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/gsdas/temp_model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-test_model_tianyi_zhang_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-test_model_tianyi_zhang_pipeline_en.md new file mode 100644 index 00000000000000..b91ac71f2d44ae --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-test_model_tianyi_zhang_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English test_model_tianyi_zhang_pipeline pipeline DistilBertForSequenceClassification from Tianyi-Zhang +author: John Snow Labs +name: test_model_tianyi_zhang_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`test_model_tianyi_zhang_pipeline` is a English model originally trained by Tianyi-Zhang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/test_model_tianyi_zhang_pipeline_en_5.5.0_3.0_1725873280803.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/test_model_tianyi_zhang_pipeline_en_5.5.0_3.0_1725873280803.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("test_model_tianyi_zhang_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("test_model_tianyi_zhang_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
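For classification pipelines such as this one, `fullAnnotate` keeps the annotation metadata that `annotate` drops, which typically includes per-label scores. A sketch, assuming the classifier writes to a `class` column as in the stand-alone model cards:

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("test_model_tianyi_zhang_pipeline", lang="en")

# fullAnnotate returns Annotation objects; metadata usually carries the
# per-class confidence scores produced by the classifier.
result = pipeline.fullAnnotate("I love spark-nlp")[0]
for annotation in result["class"]:
    print(annotation.result, annotation.metadata)
```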
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|test_model_tianyi_zhang_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/Tianyi-Zhang/test_model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-test_robeta_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-test_robeta_pipeline_en.md new file mode 100644 index 00000000000000..1ad032df413154 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-test_robeta_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English test_robeta_pipeline pipeline RoBertaEmbeddings from AndrewYan +author: John Snow Labs +name: test_robeta_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`test_robeta_pipeline` is a English model originally trained by AndrewYan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/test_robeta_pipeline_en_5.5.0_3.0_1725860701159.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/test_robeta_pipeline_en_5.5.0_3.0_1725860701159.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("test_robeta_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("test_robeta_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
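Because this pipeline ends in token-level RoBERTa embeddings, the transformed DataFrame carries one vector per token. A short, assumed inspection step (the `token` and `embeddings` column names follow the included stages):

```python
# Assumes `annotations` is the DataFrame returned by pipeline.transform(df)
# in this card's example.
annotations.select("token.result", "embeddings.embeddings").show(truncate=60)
```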
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|test_robeta_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|298.2 MB| + +## References + +https://huggingface.co/AndrewYan/test_robeta + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-testest_ar.md b/docs/_posts/ahmedlone127/2024-09-09-testest_ar.md new file mode 100644 index 00000000000000..372c05959b406d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-testest_ar.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Arabic testest MarianTransformer from wingo-dz +author: John Snow Labs +name: testest +date: 2024-09-09 +tags: [ar, open_source, onnx, translation, marian] +task: Translation +language: ar +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`testest` is a Arabic model originally trained by wingo-dz. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/testest_ar_5.5.0_3.0_1725913653098.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/testest_ar_5.5.0_3.0_1725913653098.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("testest","ar") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
  .setInputCols(Array("document"))
  .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("testest","ar")
  .setInputCols(Array("sentence"))
  .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
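For quick, single-string translation, the fitted pipeline can be wrapped in a LightPipeline, which avoids a DataFrame round-trip. A sketch that continues this card's example (variable names as above):

```python
from sparknlp.base import LightPipeline

# pipelineModel is the fitted PipelineModel from this card's example; the
# MarianTransformer stage writes its output to the "translation" column.
light = LightPipeline(pipelineModel)
print(light.annotate("I love spark-nlp")["translation"])
```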
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|testest| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|ar| +|Size:|527.3 MB| + +## References + +https://huggingface.co/wingo-dz/testest \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-testest_pipeline_ar.md b/docs/_posts/ahmedlone127/2024-09-09-testest_pipeline_ar.md new file mode 100644 index 00000000000000..7499dff9e77606 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-testest_pipeline_ar.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Arabic testest_pipeline pipeline MarianTransformer from wingo-dz +author: John Snow Labs +name: testest_pipeline +date: 2024-09-09 +tags: [ar, open_source, pipeline, onnx] +task: Translation +language: ar +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`testest_pipeline` is a Arabic model originally trained by wingo-dz. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/testest_pipeline_ar_5.5.0_3.0_1725913679049.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/testest_pipeline_ar_5.5.0_3.0_1725913679049.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("testest_pipeline", lang = "ar") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("testest_pipeline", lang = "ar") +val annotations = pipeline.transform(df) + +``` +
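`annotate` also accepts a list of strings, which is convenient for small ad-hoc batches. A sketch (the input sentences are placeholders and the `translation` output key is assumed from the included MarianTransformer stage):

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("testest_pipeline", lang="ar")

# A list input returns one result dictionary per string.
results = pipeline.annotate(["first example sentence", "second example sentence"])
for r in results:
    print(r["translation"])
```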
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|testest_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|ar| +|Size:|527.9 MB| + +## References + +https://huggingface.co/wingo-dz/testest + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-textfooler_roberta_base_rte_5_en.md b/docs/_posts/ahmedlone127/2024-09-09-textfooler_roberta_base_rte_5_en.md new file mode 100644 index 00000000000000..3e920ce2989157 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-textfooler_roberta_base_rte_5_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English textfooler_roberta_base_rte_5 RoBertaForSequenceClassification from korca +author: John Snow Labs +name: textfooler_roberta_base_rte_5 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`textfooler_roberta_base_rte_5` is a English model originally trained by korca. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/textfooler_roberta_base_rte_5_en_5.5.0_3.0_1725911635175.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/textfooler_roberta_base_rte_5_en_5.5.0_3.0_1725911635175.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = RoBertaForSequenceClassification.pretrained("textfooler_roberta_base_rte_5","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols(Array("document"))
  .setOutputCol("token")

val sequenceClassifier = RoBertaForSequenceClassification.pretrained("textfooler_roberta_base_rte_5", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|textfooler_roberta_base_rte_5| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|449.2 MB| + +## References + +https://huggingface.co/korca/textfooler-roberta-base-rte-5 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-textfooler_roberta_base_rte_5_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-textfooler_roberta_base_rte_5_pipeline_en.md new file mode 100644 index 00000000000000..2adfb2e49a79a4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-textfooler_roberta_base_rte_5_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English textfooler_roberta_base_rte_5_pipeline pipeline RoBertaForSequenceClassification from korca +author: John Snow Labs +name: textfooler_roberta_base_rte_5_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`textfooler_roberta_base_rte_5_pipeline` is a English model originally trained by korca. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/textfooler_roberta_base_rte_5_pipeline_en_5.5.0_3.0_1725911660823.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/textfooler_roberta_base_rte_5_pipeline_en_5.5.0_3.0_1725911660823.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("textfooler_roberta_base_rte_5_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("textfooler_roberta_base_rte_5_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|textfooler_roberta_base_rte_5_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|449.2 MB| + +## References + +https://huggingface.co/korca/textfooler-roberta-base-rte-5 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-tinybert_emotion_balanced_en.md b/docs/_posts/ahmedlone127/2024-09-09-tinybert_emotion_balanced_en.md new file mode 100644 index 00000000000000..9133b48ef762ef --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-tinybert_emotion_balanced_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English tinybert_emotion_balanced BertForSequenceClassification from AdamCodd +author: John Snow Labs +name: tinybert_emotion_balanced +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`tinybert_emotion_balanced` is a English model originally trained by AdamCodd. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/tinybert_emotion_balanced_en_5.5.0_3.0_1725900102797.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/tinybert_emotion_balanced_en_5.5.0_3.0_1725900102797.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = BertForSequenceClassification.pretrained("tinybert_emotion_balanced","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols(Array("document"))
  .setOutputCol("token")

val sequenceClassifier = BertForSequenceClassification.pretrained("tinybert_emotion_balanced", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
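Scoring more than one text only means adding rows to the input DataFrame. A small, assumed batch example that reuses the fitted pipeline from this card:

```python
# Hypothetical batch input; pipelineModel is the fitted pipeline defined above.
texts = [["I am thrilled with the results"], ["This is very disappointing"]]
batch = spark.createDataFrame(texts).toDF("text")
batch_predictions = pipelineModel.transform(batch)
batch_predictions.select("text", "class.result").show(truncate=False)
```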
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|tinybert_emotion_balanced| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|16.7 MB| + +## References + +https://huggingface.co/AdamCodd/tinybert-emotion-balanced \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-tinybert_emotion_balanced_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-tinybert_emotion_balanced_pipeline_en.md new file mode 100644 index 00000000000000..97903faeba3d4d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-tinybert_emotion_balanced_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English tinybert_emotion_balanced_pipeline pipeline BertForSequenceClassification from AdamCodd +author: John Snow Labs +name: tinybert_emotion_balanced_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`tinybert_emotion_balanced_pipeline` is a English model originally trained by AdamCodd. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/tinybert_emotion_balanced_pipeline_en_5.5.0_3.0_1725900104061.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/tinybert_emotion_balanced_pipeline_en_5.5.0_3.0_1725900104061.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("tinybert_emotion_balanced_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("tinybert_emotion_balanced_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|tinybert_emotion_balanced_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|16.8 MB| + +## References + +https://huggingface.co/AdamCodd/tinybert-emotion-balanced + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-tinybert_sst2_en.md b/docs/_posts/ahmedlone127/2024-09-09-tinybert_sst2_en.md new file mode 100644 index 00000000000000..bdc1dbdda63b6f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-tinybert_sst2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English tinybert_sst2 BertForSequenceClassification from Vishnou +author: John Snow Labs +name: tinybert_sst2 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`tinybert_sst2` is a English model originally trained by Vishnou. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/tinybert_sst2_en_5.5.0_3.0_1725856775547.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/tinybert_sst2_en_5.5.0_3.0_1725856775547.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = BertForSequenceClassification.pretrained("tinybert_sst2","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols(Array("document"))
  .setOutputCol("token")

val sequenceClassifier = BertForSequenceClassification.pretrained("tinybert_sst2", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
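The labels a `*ForSequenceClassification` annotator can predict are stored with the exported model and can be read back from the loaded annotator. A sketch, assuming the same variable name as this card's example:

```python
from sparknlp.annotator import BertForSequenceClassification

# getClasses lists the labels the classifier was exported with,
# e.g. positive/negative for an SST-2 style model.
sequenceClassifier = BertForSequenceClassification.pretrained("tinybert_sst2", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")
print(sequenceClassifier.getClasses())
```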
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|tinybert_sst2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|54.2 MB| + +## References + +https://huggingface.co/Vishnou/TinyBERT_SST2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-tmp_trainer_juncodh_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-tmp_trainer_juncodh_pipeline_en.md new file mode 100644 index 00000000000000..e5b9ade3c2a94d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-tmp_trainer_juncodh_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English tmp_trainer_juncodh_pipeline pipeline RoBertaForQuestionAnswering from Juncodh +author: John Snow Labs +name: tmp_trainer_juncodh_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`tmp_trainer_juncodh_pipeline` is a English model originally trained by Juncodh. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/tmp_trainer_juncodh_pipeline_en_5.5.0_3.0_1725876575691.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/tmp_trainer_juncodh_pipeline_en_5.5.0_3.0_1725876575691.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("tmp_trainer_juncodh_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("tmp_trainer_juncodh_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
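Question-answering pipelines built on MultiDocumentAssembler read two input columns rather than one. A hedged sketch of a plausible input DataFrame; the `question`/`context` column names and the `answer` output column are assumptions based on the usual Spark NLP QA convention, not taken from this card:

```python
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

spark = sparknlp.start()
pipeline = PretrainedPipeline("tmp_trainer_juncodh_pipeline", lang="en")

# Assumed two-column input: the question and the passage to search for the answer.
df = spark.createDataFrame(
    [["What does Spark NLP provide?", "Spark NLP provides production-grade NLP annotators."]]
).toDF("question", "context")
pipeline.transform(df).select("answer.result").show(truncate=False)
```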
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|tmp_trainer_juncodh_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|306.2 MB| + +## References + +https://huggingface.co/Juncodh/tmp_trainer + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-topic_antitrust_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-topic_antitrust_pipeline_en.md new file mode 100644 index 00000000000000..246eac8d2f68c7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-topic_antitrust_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English topic_antitrust_pipeline pipeline RoBertaForSequenceClassification from dell-research-harvard +author: John Snow Labs +name: topic_antitrust_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`topic_antitrust_pipeline` is a English model originally trained by dell-research-harvard. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/topic_antitrust_pipeline_en_5.5.0_3.0_1725904492877.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/topic_antitrust_pipeline_en_5.5.0_3.0_1725904492877.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("topic_antitrust_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("topic_antitrust_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
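Pretrained pipelines are downloaded once and cached locally, but the underlying Spark ML PipelineModel can also be saved to a path you control for fully offline reuse. A sketch; the `.model` attribute and the target path are assumptions:

```python
from pyspark.ml import PipelineModel
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("topic_antitrust_pipeline", lang="en")

# Persist the underlying Spark ML model to a location of your choice ...
pipeline.model.write().overwrite().save("/tmp/topic_antitrust_pipeline")

# ... and load it back later without re-downloading.
restored = PipelineModel.load("/tmp/topic_antitrust_pipeline")
```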
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|topic_antitrust_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/dell-research-harvard/topic-antitrust + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-topic_topic_random0_seed2_twitter_roberta_base_dec2020_en.md b/docs/_posts/ahmedlone127/2024-09-09-topic_topic_random0_seed2_twitter_roberta_base_dec2020_en.md new file mode 100644 index 00000000000000..4826ed102fdc2c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-topic_topic_random0_seed2_twitter_roberta_base_dec2020_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English topic_topic_random0_seed2_twitter_roberta_base_dec2020 RoBertaForSequenceClassification from tweettemposhift +author: John Snow Labs +name: topic_topic_random0_seed2_twitter_roberta_base_dec2020 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`topic_topic_random0_seed2_twitter_roberta_base_dec2020` is a English model originally trained by tweettemposhift. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/topic_topic_random0_seed2_twitter_roberta_base_dec2020_en_5.5.0_3.0_1725904501962.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/topic_topic_random0_seed2_twitter_roberta_base_dec2020_en_5.5.0_3.0_1725904501962.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = RoBertaForSequenceClassification.pretrained("topic_topic_random0_seed2_twitter_roberta_base_dec2020","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols(Array("document"))
  .setOutputCol("token")

val sequenceClassifier = RoBertaForSequenceClassification.pretrained("topic_topic_random0_seed2_twitter_roberta_base_dec2020", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|topic_topic_random0_seed2_twitter_roberta_base_dec2020| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.4 MB| + +## References + +https://huggingface.co/tweettemposhift/topic-topic_random0_seed2-twitter-roberta-base-dec2020 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-topic_topic_random0_seed2_twitter_roberta_base_dec2020_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-topic_topic_random0_seed2_twitter_roberta_base_dec2020_pipeline_en.md new file mode 100644 index 00000000000000..e5862db29857d6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-topic_topic_random0_seed2_twitter_roberta_base_dec2020_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English topic_topic_random0_seed2_twitter_roberta_base_dec2020_pipeline pipeline RoBertaForSequenceClassification from tweettemposhift +author: John Snow Labs +name: topic_topic_random0_seed2_twitter_roberta_base_dec2020_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`topic_topic_random0_seed2_twitter_roberta_base_dec2020_pipeline` is a English model originally trained by tweettemposhift. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/topic_topic_random0_seed2_twitter_roberta_base_dec2020_pipeline_en_5.5.0_3.0_1725904527867.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/topic_topic_random0_seed2_twitter_roberta_base_dec2020_pipeline_en_5.5.0_3.0_1725904527867.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("topic_topic_random0_seed2_twitter_roberta_base_dec2020_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("topic_topic_random0_seed2_twitter_roberta_base_dec2020_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|topic_topic_random0_seed2_twitter_roberta_base_dec2020_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.4 MB| + +## References + +https://huggingface.co/tweettemposhift/topic-topic_random0_seed2-twitter-roberta-base-dec2020 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-topic_topic_random2_seed0_bertweet_large_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-topic_topic_random2_seed0_bertweet_large_pipeline_en.md new file mode 100644 index 00000000000000..4d7d2499e0384a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-topic_topic_random2_seed0_bertweet_large_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English topic_topic_random2_seed0_bertweet_large_pipeline pipeline RoBertaForSequenceClassification from tweettemposhift +author: John Snow Labs +name: topic_topic_random2_seed0_bertweet_large_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`topic_topic_random2_seed0_bertweet_large_pipeline` is a English model originally trained by tweettemposhift. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/topic_topic_random2_seed0_bertweet_large_pipeline_en_5.5.0_3.0_1725911686300.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/topic_topic_random2_seed0_bertweet_large_pipeline_en_5.5.0_3.0_1725911686300.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("topic_topic_random2_seed0_bertweet_large_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("topic_topic_random2_seed0_bertweet_large_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|topic_topic_random2_seed0_bertweet_large_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/tweettemposhift/topic-topic_random2_seed0-bertweet-large + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-trans_encoder_cross_simcse_roberta_base_en.md b/docs/_posts/ahmedlone127/2024-09-09-trans_encoder_cross_simcse_roberta_base_en.md new file mode 100644 index 00000000000000..addd0bf4713871 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-trans_encoder_cross_simcse_roberta_base_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English trans_encoder_cross_simcse_roberta_base RoBertaForSequenceClassification from cambridgeltl +author: John Snow Labs +name: trans_encoder_cross_simcse_roberta_base +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`trans_encoder_cross_simcse_roberta_base` is a English model originally trained by cambridgeltl. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/trans_encoder_cross_simcse_roberta_base_en_5.5.0_3.0_1725904464990.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/trans_encoder_cross_simcse_roberta_base_en_5.5.0_3.0_1725904464990.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = RoBertaForSequenceClassification.pretrained("trans_encoder_cross_simcse_roberta_base","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols(Array("document"))
  .setOutputCol("token")

val sequenceClassifier = RoBertaForSequenceClassification.pretrained("trans_encoder_cross_simcse_roberta_base", "en")
  .setInputCols(Array("document","token"))
  .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|trans_encoder_cross_simcse_roberta_base| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|456.1 MB| + +## References + +https://huggingface.co/cambridgeltl/trans-encoder-cross-simcse-roberta-base \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-trans_encoder_cross_simcse_roberta_base_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-trans_encoder_cross_simcse_roberta_base_pipeline_en.md new file mode 100644 index 00000000000000..3aa28ee7c05371 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-trans_encoder_cross_simcse_roberta_base_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English trans_encoder_cross_simcse_roberta_base_pipeline pipeline RoBertaForSequenceClassification from cambridgeltl +author: John Snow Labs +name: trans_encoder_cross_simcse_roberta_base_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`trans_encoder_cross_simcse_roberta_base_pipeline` is a English model originally trained by cambridgeltl. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/trans_encoder_cross_simcse_roberta_base_pipeline_en_5.5.0_3.0_1725904489811.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/trans_encoder_cross_simcse_roberta_base_pipeline_en_5.5.0_3.0_1725904489811.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("trans_encoder_cross_simcse_roberta_base_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("trans_encoder_cross_simcse_roberta_base_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|trans_encoder_cross_simcse_roberta_base_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|456.2 MB| + +## References + +https://huggingface.co/cambridgeltl/trans-encoder-cross-simcse-roberta-base + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-translation_english_lug_v4_en.md b/docs/_posts/ahmedlone127/2024-09-09-translation_english_lug_v4_en.md new file mode 100644 index 00000000000000..636833060ee202 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-translation_english_lug_v4_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English translation_english_lug_v4 MarianTransformer from atwine +author: John Snow Labs +name: translation_english_lug_v4 +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`translation_english_lug_v4` is a English model originally trained by atwine. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/translation_english_lug_v4_en_5.5.0_3.0_1725914167757.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/translation_english_lug_v4_en_5.5.0_3.0_1725914167757.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("translation_english_lug_v4","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
  .setInputCols(Array("document"))
  .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("translation_english_lug_v4","en")
  .setInputCols(Array("sentence"))
  .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
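After `transform`, the translated sentences end up in the `translation` annotation column of the result DataFrame; a short, assumed extraction step that continues this card's example:

```python
# One translated string per detected sentence is returned in translation.result.
pipelineDF.select("translation.result").show(truncate=False)
```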
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|translation_english_lug_v4| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|513.3 MB| + +## References + +https://huggingface.co/atwine/translation-en-lug-v4 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-translation_english_lug_v4_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-translation_english_lug_v4_pipeline_en.md new file mode 100644 index 00000000000000..415b91d67ec20f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-translation_english_lug_v4_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English translation_english_lug_v4_pipeline pipeline MarianTransformer from atwine +author: John Snow Labs +name: translation_english_lug_v4_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`translation_english_lug_v4_pipeline` is a English model originally trained by atwine. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/translation_english_lug_v4_pipeline_en_5.5.0_3.0_1725914193571.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/translation_english_lug_v4_pipeline_en_5.5.0_3.0_1725914193571.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("translation_english_lug_v4_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("translation_english_lug_v4_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|translation_english_lug_v4_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|513.9 MB| + +## References + +https://huggingface.co/atwine/translation-en-lug-v4 + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-translation_english_lug_v5_en.md b/docs/_posts/ahmedlone127/2024-09-09-translation_english_lug_v5_en.md new file mode 100644 index 00000000000000..cdaa0b0d129fd9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-translation_english_lug_v5_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English translation_english_lug_v5 MarianTransformer from atwine +author: John Snow Labs +name: translation_english_lug_v5 +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`translation_english_lug_v5` is a English model originally trained by atwine. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/translation_english_lug_v5_en_5.5.0_3.0_1725913122157.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/translation_english_lug_v5_en_5.5.0_3.0_1725913122157.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

marian = MarianTransformer.pretrained("translation_english_lug_v5","en") \
    .setInputCols(["sentence"]) \
    .setOutputCol("translation")

pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
  .setInputCols(Array("document"))
  .setOutputCol("sentence")

val marian = MarianTransformer.pretrained("translation_english_lug_v5","en")
  .setInputCols(Array("sentence"))
  .setOutputCol("translation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
val data = Seq("I love spark-nlp").toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|translation_english_lug_v5| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|513.3 MB| + +## References + +https://huggingface.co/atwine/translation-en-lug-v5 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-translation_english_lug_v5_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-translation_english_lug_v5_pipeline_en.md new file mode 100644 index 00000000000000..eb44e2186157a1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-translation_english_lug_v5_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English translation_english_lug_v5_pipeline pipeline MarianTransformer from atwine +author: John Snow Labs +name: translation_english_lug_v5_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`translation_english_lug_v5_pipeline` is a English model originally trained by atwine. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/translation_english_lug_v5_pipeline_en_5.5.0_3.0_1725913147940.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/translation_english_lug_v5_pipeline_en_5.5.0_3.0_1725913147940.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("translation_english_lug_v5_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("translation_english_lug_v5_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|translation_english_lug_v5_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|513.9 MB| + +## References + +https://huggingface.co/atwine/translation-en-lug-v5 + +## Included Models + +- DocumentAssembler +- SentenceDetectorDLModel +- MarianTransformer \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-translation_model_english_korean_en.md b/docs/_posts/ahmedlone127/2024-09-09-translation_model_english_korean_en.md new file mode 100644 index 00000000000000..d158ea5ebb320b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-translation_model_english_korean_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English translation_model_english_korean MarianTransformer from pellucid +author: John Snow Labs +name: translation_model_english_korean +date: 2024-09-09 +tags: [en, open_source, onnx, translation, marian] +task: Translation +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MarianTransformer +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MarianTransformer model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`translation_model_english_korean` is a English model originally trained by pellucid. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/translation_model_english_korean_en_5.5.0_3.0_1725913139130.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/translation_model_english_korean_en_5.5.0_3.0_1725913139130.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+# Detect sentences so the translator receives one sentence at a time.
+sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
+    .setInputCols(["document"]) \
+    .setOutputCol("sentence")
+
+marian = MarianTransformer.pretrained("translation_model_english_korean","en") \
+    .setInputCols(["sentence"]) \
+    .setOutputCol("translation")
+
+pipeline = Pipeline().setStages([documentAssembler, sentenceDL, marian])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+// Detect sentences so the translator receives one sentence at a time.
+val sentenceDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
+    .setInputCols(Array("document"))
+    .setOutputCol("sentence")
+
+val marian = MarianTransformer.pretrained("translation_model_english_korean","en")
+    .setInputCols(Array("sentence"))
+    .setOutputCol("translation")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, sentenceDL, marian))
+val data = Seq("I love spark-nlp").toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|translation_model_english_korean| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[sentences]| +|Output Labels:|[translation]| +|Language:|en| +|Size:|1.0 GB| + +## References + +https://huggingface.co/pellucid/translation_model_en_ko \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-trial_model_debasmita_en.md b/docs/_posts/ahmedlone127/2024-09-09-trial_model_debasmita_en.md new file mode 100644 index 00000000000000..fd6d1b37ddd09f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-trial_model_debasmita_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English trial_model_debasmita RoBertaForSequenceClassification from debasmita +author: John Snow Labs +name: trial_model_debasmita +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`trial_model_debasmita` is a English model originally trained by debasmita. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/trial_model_debasmita_en_5.5.0_3.0_1725903682475.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/trial_model_debasmita_en_5.5.0_3.0_1725903682475.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+# Input column names must match the outputs of the stages above.
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("trial_model_debasmita","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("trial_model_debasmita", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
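+
+A quick way to read off the predicted label from the example above (a sketch; `pipelineDF` and the `class` column come from the Python example, and the label set is defined by the fine-tuned model itself):
+
+```python
+# "class" is the classifier output column configured above.
+pipelineDF.select("text", "class.result").show(truncate=False)
+```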
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|trial_model_debasmita| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|416.3 MB| + +## References + +https://huggingface.co/debasmita/trial-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-twitter_roberta_base_efl_hateval_en.md b/docs/_posts/ahmedlone127/2024-09-09-twitter_roberta_base_efl_hateval_en.md new file mode 100644 index 00000000000000..e162fc16e6e3ed --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-twitter_roberta_base_efl_hateval_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English twitter_roberta_base_efl_hateval RoBertaForSequenceClassification from ChrisZeng +author: John Snow Labs +name: twitter_roberta_base_efl_hateval +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`twitter_roberta_base_efl_hateval` is a English model originally trained by ChrisZeng. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/twitter_roberta_base_efl_hateval_en_5.5.0_3.0_1725911286326.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/twitter_roberta_base_efl_hateval_en_5.5.0_3.0_1725911286326.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+# Input column names must match the outputs of the stages above.
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("twitter_roberta_base_efl_hateval","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("twitter_roberta_base_efl_hateval", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|twitter_roberta_base_efl_hateval| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.3 MB| + +## References + +https://huggingface.co/ChrisZeng/twitter-roberta-base-efl-hateval \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-twitter_roberta_base_efl_hateval_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-twitter_roberta_base_efl_hateval_pipeline_en.md new file mode 100644 index 00000000000000..5d5ca8d5224c59 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-twitter_roberta_base_efl_hateval_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English twitter_roberta_base_efl_hateval_pipeline pipeline RoBertaForSequenceClassification from ChrisZeng +author: John Snow Labs +name: twitter_roberta_base_efl_hateval_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`twitter_roberta_base_efl_hateval_pipeline` is a English model originally trained by ChrisZeng. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/twitter_roberta_base_efl_hateval_pipeline_en_5.5.0_3.0_1725911310275.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/twitter_roberta_base_efl_hateval_pipeline_en_5.5.0_3.0_1725911310275.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("twitter_roberta_base_efl_hateval_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("twitter_roberta_base_efl_hateval_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
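+
+For ad-hoc texts there is no need to build a DataFrame first; `annotate` runs the same stages on a plain string (a sketch, assuming an already started Spark NLP session):
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("twitter_roberta_base_efl_hateval_pipeline", lang = "en")
+
+# Returns a dict keyed by the pipeline's output column names.
+result = pipeline.annotate("I love spark-nlp")
+print(result)
+```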
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|twitter_roberta_base_efl_hateval_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.3 MB| + +## References + +https://huggingface.co/ChrisZeng/twitter-roberta-base-efl-hateval + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-twitter_roberta_large_intimacy_latest_en.md b/docs/_posts/ahmedlone127/2024-09-09-twitter_roberta_large_intimacy_latest_en.md new file mode 100644 index 00000000000000..1d44c868615ff8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-twitter_roberta_large_intimacy_latest_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English twitter_roberta_large_intimacy_latest RoBertaForSequenceClassification from cardiffnlp +author: John Snow Labs +name: twitter_roberta_large_intimacy_latest +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`twitter_roberta_large_intimacy_latest` is a English model originally trained by cardiffnlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/twitter_roberta_large_intimacy_latest_en_5.5.0_3.0_1725920750891.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/twitter_roberta_large_intimacy_latest_en_5.5.0_3.0_1725920750891.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = RoBertaForSequenceClassification.pretrained("twitter_roberta_large_intimacy_latest","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = RoBertaForSequenceClassification.pretrained("twitter_roberta_large_intimacy_latest", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|twitter_roberta_large_intimacy_latest| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/cardiffnlp/twitter-roberta-large-intimacy-latest \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-twitter_roberta_large_nerd_latest_en.md b/docs/_posts/ahmedlone127/2024-09-09-twitter_roberta_large_nerd_latest_en.md new file mode 100644 index 00000000000000..fc567bf3ce2c88 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-twitter_roberta_large_nerd_latest_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English twitter_roberta_large_nerd_latest RoBertaForSequenceClassification from cardiffnlp +author: John Snow Labs +name: twitter_roberta_large_nerd_latest +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`twitter_roberta_large_nerd_latest` is a English model originally trained by cardiffnlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/twitter_roberta_large_nerd_latest_en_5.5.0_3.0_1725902414176.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/twitter_roberta_large_nerd_latest_en_5.5.0_3.0_1725902414176.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = RoBertaForSequenceClassification.pretrained("twitter_roberta_large_nerd_latest","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = RoBertaForSequenceClassification.pretrained("twitter_roberta_large_nerd_latest", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|twitter_roberta_large_nerd_latest| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/cardiffnlp/twitter-roberta-large-nerd-latest \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-twitter_roberta_large_nerd_latest_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-twitter_roberta_large_nerd_latest_pipeline_en.md new file mode 100644 index 00000000000000..0d8a35c7a15f10 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-twitter_roberta_large_nerd_latest_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English twitter_roberta_large_nerd_latest_pipeline pipeline RoBertaForSequenceClassification from cardiffnlp +author: John Snow Labs +name: twitter_roberta_large_nerd_latest_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`twitter_roberta_large_nerd_latest_pipeline` is a English model originally trained by cardiffnlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/twitter_roberta_large_nerd_latest_pipeline_en_5.5.0_3.0_1725902479775.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/twitter_roberta_large_nerd_latest_pipeline_en_5.5.0_3.0_1725902479775.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("twitter_roberta_large_nerd_latest_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("twitter_roberta_large_nerd_latest_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|twitter_roberta_large_nerd_latest_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/cardiffnlp/twitter-roberta-large-nerd-latest + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-twitter_xlm_roberta_base_en.md b/docs/_posts/ahmedlone127/2024-09-09-twitter_xlm_roberta_base_en.md new file mode 100644 index 00000000000000..b4ccdcd0d44350 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-twitter_xlm_roberta_base_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English twitter_xlm_roberta_base XlmRoBertaForSequenceClassification from RICHARDMENSAH +author: John Snow Labs +name: twitter_xlm_roberta_base +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`twitter_xlm_roberta_base` is a English model originally trained by RICHARDMENSAH. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/twitter_xlm_roberta_base_en_5.5.0_3.0_1725907896219.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/twitter_xlm_roberta_base_en_5.5.0_3.0_1725907896219.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+# Input column names must match the outputs of the stages above.
+sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("twitter_xlm_roberta_base","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("twitter_xlm_roberta_base", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|twitter_xlm_roberta_base| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.0 GB| + +## References + +https://huggingface.co/RICHARDMENSAH/twitter_xlm_roberta_base \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-vietnamese_sentimental_analysis_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-vietnamese_sentimental_analysis_pipeline_en.md new file mode 100644 index 00000000000000..9560d2cd566344 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-vietnamese_sentimental_analysis_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English vietnamese_sentimental_analysis_pipeline pipeline DistilBertForSequenceClassification from thanhchauns2 +author: John Snow Labs +name: vietnamese_sentimental_analysis_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`vietnamese_sentimental_analysis_pipeline` is a English model originally trained by thanhchauns2. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/vietnamese_sentimental_analysis_pipeline_en_5.5.0_3.0_1725872984019.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/vietnamese_sentimental_analysis_pipeline_en_5.5.0_3.0_1725872984019.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("vietnamese_sentimental_analysis_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("vietnamese_sentimental_analysis_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|vietnamese_sentimental_analysis_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|249.5 MB| + +## References + +https://huggingface.co/thanhchauns2/vietnamese-sentimental-analysis + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-whispasr_pipeline_hi.md b/docs/_posts/ahmedlone127/2024-09-09-whispasr_pipeline_hi.md new file mode 100644 index 00000000000000..afc85f672c80ff --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-whispasr_pipeline_hi.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Hindi whispasr_pipeline pipeline WhisperForCTC from Rithik101 +author: John Snow Labs +name: whispasr_pipeline +date: 2024-09-09 +tags: [hi, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: hi +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whispasr_pipeline` is a Hindi model originally trained by Rithik101. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whispasr_pipeline_hi_5.5.0_3.0_1725844951714.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whispasr_pipeline_hi_5.5.0_3.0_1725844951714.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("whispasr_pipeline", lang = "hi") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("whispasr_pipeline", lang = "hi") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whispasr_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|hi| +|Size:|1.7 GB| + +## References + +https://huggingface.co/Rithik101/WhispASR + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-whisper_base_lithuanian_pipeline_lt.md b/docs/_posts/ahmedlone127/2024-09-09-whisper_base_lithuanian_pipeline_lt.md new file mode 100644 index 00000000000000..ae940f7575db9d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-whisper_base_lithuanian_pipeline_lt.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Lithuanian whisper_base_lithuanian_pipeline pipeline WhisperForCTC from Aismantas +author: John Snow Labs +name: whisper_base_lithuanian_pipeline +date: 2024-09-09 +tags: [lt, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: lt +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_base_lithuanian_pipeline` is a Lithuanian model originally trained by Aismantas. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_base_lithuanian_pipeline_lt_5.5.0_3.0_1725846217691.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_base_lithuanian_pipeline_lt_5.5.0_3.0_1725846217691.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("whisper_base_lithuanian_pipeline", lang = "lt") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("whisper_base_lithuanian_pipeline", lang = "lt") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_base_lithuanian_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|lt| +|Size:|642.0 MB| + +## References + +https://huggingface.co/Aismantas/whisper-base-lithuanian + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-wolof_qa_model_a_en.md b/docs/_posts/ahmedlone127/2024-09-09-wolof_qa_model_a_en.md new file mode 100644 index 00000000000000..8c288a19a41c78 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-wolof_qa_model_a_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English wolof_qa_model_a DistilBertForQuestionAnswering from gjonesQ02 +author: John Snow Labs +name: wolof_qa_model_a +date: 2024-09-09 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`wolof_qa_model_a` is a English model originally trained by gjonesQ02. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/wolof_qa_model_a_en_5.5.0_3.0_1725876953579.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/wolof_qa_model_a_en_5.5.0_3.0_1725876953579.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("wolof_qa_model_a","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+# The input DataFrame columns must match the assembler's input column names.
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("wolof_qa_model_a", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+// The input DataFrame columns must match the assembler's input column names.
+val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
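+
+To read the predicted answer span from the example above (a sketch; `pipelineDF` and the `answer` column come from the Python example):
+
+```python
+# "answer" is the span classifier output column configured above.
+pipelineDF.select("answer.result").show(truncate=False)
+```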
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|wolof_qa_model_a| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/gjonesQ02/wo_QA_Model_A \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_r_galen_caresa_es.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_r_galen_caresa_es.md new file mode 100644 index 00000000000000..0b7aacaff35812 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_r_galen_caresa_es.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Castilian, Spanish xlm_r_galen_caresa XlmRoBertaForSequenceClassification from IIC +author: John Snow Labs +name: xlm_r_galen_caresa +date: 2024-09-09 +tags: [es, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: es +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_r_galen_caresa` is a Castilian, Spanish model originally trained by IIC. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_r_galen_caresa_es_5.5.0_3.0_1725870568402.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_r_galen_caresa_es_5.5.0_3.0_1725870568402.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+# Input column names must match the outputs of the stages above.
+sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_r_galen_caresa","es") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_r_galen_caresa", "es")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_r_galen_caresa| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|es| +|Size:|1.0 GB| + +## References + +https://huggingface.co/IIC/XLM_R_Galen-caresA \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_english_sentweet_derogatory_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_english_sentweet_derogatory_en.md new file mode 100644 index 00000000000000..ba7535c8e57851 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_english_sentweet_derogatory_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_english_sentweet_derogatory XlmRoBertaForSequenceClassification from jayanta +author: John Snow Labs +name: xlm_roberta_base_english_sentweet_derogatory +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_english_sentweet_derogatory` is a English model originally trained by jayanta. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_english_sentweet_derogatory_en_5.5.0_3.0_1725870583201.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_english_sentweet_derogatory_en_5.5.0_3.0_1725870583201.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_roberta_base_english_sentweet_derogatory","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_roberta_base_english_sentweet_derogatory", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_english_sentweet_derogatory| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|785.8 MB| + +## References + +https://huggingface.co/jayanta/xlm-roberta-base-english-sentweet-derogatory \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_final_mixed_aug_insert_bert_1_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_final_mixed_aug_insert_bert_1_en.md new file mode 100644 index 00000000000000..ef82d925ff954b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_final_mixed_aug_insert_bert_1_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_final_mixed_aug_insert_bert_1 XlmRoBertaForSequenceClassification from ThuyNT03 +author: John Snow Labs +name: xlm_roberta_base_final_mixed_aug_insert_bert_1 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_final_mixed_aug_insert_bert_1` is a English model originally trained by ThuyNT03. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_final_mixed_aug_insert_bert_1_en_5.5.0_3.0_1725906780277.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_final_mixed_aug_insert_bert_1_en_5.5.0_3.0_1725906780277.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_roberta_base_final_mixed_aug_insert_bert_1","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_roberta_base_final_mixed_aug_insert_bert_1", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_final_mixed_aug_insert_bert_1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|795.4 MB| + +## References + +https://huggingface.co/ThuyNT03/xlm-roberta-base-Final_Mixed-aug_insert_BERT-1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_final_mixed_aug_replace_bert_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_final_mixed_aug_replace_bert_en.md new file mode 100644 index 00000000000000..18db02ce898f16 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_final_mixed_aug_replace_bert_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_final_mixed_aug_replace_bert XlmRoBertaForSequenceClassification from ThuyNT03 +author: John Snow Labs +name: xlm_roberta_base_final_mixed_aug_replace_bert +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_final_mixed_aug_replace_bert` is a English model originally trained by ThuyNT03. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_final_mixed_aug_replace_bert_en_5.5.0_3.0_1725871447018.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_final_mixed_aug_replace_bert_en_5.5.0_3.0_1725871447018.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_roberta_base_final_mixed_aug_replace_bert","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_roberta_base_final_mixed_aug_replace_bert", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_final_mixed_aug_replace_bert| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|795.6 MB| + +## References + +https://huggingface.co/ThuyNT03/xlm-roberta-base-Final_Mixed-aug_replace_BERT \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_marc_begar_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_marc_begar_en.md new file mode 100644 index 00000000000000..34bd669640321d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_marc_begar_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_marc_begar XlmRoBertaForSequenceClassification from begar +author: John Snow Labs +name: xlm_roberta_base_finetuned_marc_begar +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_marc_begar` is a English model originally trained by begar. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_marc_begar_en_5.5.0_3.0_1725907567241.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_marc_begar_en_5.5.0_3.0_1725907567241.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_roberta_base_finetuned_marc_begar","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_roberta_base_finetuned_marc_begar", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_marc_begar| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|835.1 MB| + +## References + +https://huggingface.co/begar/xlm-roberta-base-finetuned-marc \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_all_mjqing_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_all_mjqing_en.md new file mode 100644 index 00000000000000..12c2a7c105a959 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_all_mjqing_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_all_mjqing XlmRoBertaForTokenClassification from MJQing +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_all_mjqing +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_all_mjqing` is a English model originally trained by MJQing. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_all_mjqing_en_5.5.0_3.0_1725918853430.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_all_mjqing_en_5.5.0_3.0_1725918853430.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+# Input column names must match the outputs of the stages above.
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_all_mjqing","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_all_mjqing", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
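+
+To pair each token with its predicted tag from the example above (a sketch; `pipelineDF`, `token`, and `ner` come from the Python example):
+
+```python
+# Both columns are arrays of annotations; selecting ".result" keeps just the strings.
+pipelineDF.select("token.result", "ner.result").show(truncate=False)
+```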
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_all_mjqing| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|848.0 MB| + +## References + +https://huggingface.co/MJQing/xlm-roberta-base-finetuned-panx-all \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_all_mjqing_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_all_mjqing_pipeline_en.md new file mode 100644 index 00000000000000..250b463dd02136 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_all_mjqing_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_all_mjqing_pipeline pipeline XlmRoBertaForTokenClassification from MJQing +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_all_mjqing_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_all_mjqing_pipeline` is a English model originally trained by MJQing. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_all_mjqing_pipeline_en_5.5.0_3.0_1725918939009.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_all_mjqing_pipeline_en_5.5.0_3.0_1725918939009.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

from sparknlp.pretrained import PretrainedPipeline

# input DataFrame with a "text" column (assumes an active Spark NLP session)
df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")

pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_all_mjqing_pipeline", lang = "en")
annotations = pipeline.transform(df)

```
```scala

import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline
import spark.implicits._

// input DataFrame with a "text" column
val df = Seq("I love spark-nlp").toDF("text")

val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_all_mjqing_pipeline", lang = "en")
val annotations = pipeline.transform(df)

```
</div>
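For a quick check without building a DataFrame, the same pretrained pipeline can also annotate a raw string. This is a minimal sketch assuming the `pipeline` object created above; the result keys shown (`token`, `ner`) mirror the columns implied by the Included Models list and are an assumption, since a pipeline may use different output column names.

```python
# Minimal sketch: ad-hoc annotation of a single string with the pipeline above.
result = pipeline.annotate("John Snow Labs is based in Delaware")

# keys correspond to the pipeline's output columns (assumed names)
print(result["token"])  # tokens produced by the TokenizerModel
print(result["ner"])    # predicted entity tag per token
```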
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_all_mjqing_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|848.0 MB| + +## References + +https://huggingface.co/MJQing/xlm-roberta-base-finetuned-panx-all + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_all_seddiktrk_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_all_seddiktrk_pipeline_en.md new file mode 100644 index 00000000000000..9047edaa183130 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_all_seddiktrk_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_all_seddiktrk_pipeline pipeline XlmRoBertaForTokenClassification from seddiktrk +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_all_seddiktrk_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_all_seddiktrk_pipeline` is a English model originally trained by seddiktrk. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_all_seddiktrk_pipeline_en_5.5.0_3.0_1725894721723.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_all_seddiktrk_pipeline_en_5.5.0_3.0_1725894721723.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_all_seddiktrk_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_all_seddiktrk_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_all_seddiktrk_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|848.0 MB| + +## References + +https://huggingface.co/seddiktrk/xlm-roberta-base-finetuned-panx-all + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_all_smallsuper_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_all_smallsuper_en.md new file mode 100644 index 00000000000000..9fdd534559329b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_all_smallsuper_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_all_smallsuper XlmRoBertaForTokenClassification from smallsuper +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_all_smallsuper +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_all_smallsuper` is a English model originally trained by smallsuper. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_all_smallsuper_en_5.5.0_3.0_1725894482279.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_all_smallsuper_en_5.5.0_3.0_1725894482279.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_all_smallsuper","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_all_smallsuper", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_all_smallsuper| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|857.9 MB| + +## References + +https://huggingface.co/smallsuper/xlm-roberta-base-finetuned-panx-all \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_all_smallsuper_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_all_smallsuper_pipeline_en.md new file mode 100644 index 00000000000000..15d15024ee2d6d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_all_smallsuper_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_all_smallsuper_pipeline pipeline XlmRoBertaForTokenClassification from smallsuper +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_all_smallsuper_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_all_smallsuper_pipeline` is a English model originally trained by smallsuper. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_all_smallsuper_pipeline_en_5.5.0_3.0_1725894549159.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_all_smallsuper_pipeline_en_5.5.0_3.0_1725894549159.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_all_smallsuper_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_all_smallsuper_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_all_smallsuper_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|857.9 MB| + +## References + +https://huggingface.co/smallsuper/xlm-roberta-base-finetuned-panx-all + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_english_huggingbase_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_english_huggingbase_pipeline_en.md new file mode 100644 index 00000000000000..1f950b61ee6751 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_english_huggingbase_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_english_huggingbase_pipeline pipeline XlmRoBertaForTokenClassification from huggingbase +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_english_huggingbase_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_english_huggingbase_pipeline` is a English model originally trained by huggingbase. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_english_huggingbase_pipeline_en_5.5.0_3.0_1725894536720.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_english_huggingbase_pipeline_en_5.5.0_3.0_1725894536720.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_english_huggingbase_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_english_huggingbase_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_english_huggingbase_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|826.4 MB| + +## References + +https://huggingface.co/huggingbase/xlm-roberta-base-finetuned-panx-en + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_english_the_neural_networker_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_english_the_neural_networker_pipeline_en.md new file mode 100644 index 00000000000000..a29f0157c45194 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_english_the_neural_networker_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_english_the_neural_networker_pipeline pipeline XlmRoBertaForTokenClassification from the-neural-networker +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_english_the_neural_networker_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_english_the_neural_networker_pipeline` is a English model originally trained by the-neural-networker. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_english_the_neural_networker_pipeline_en_5.5.0_3.0_1725895433336.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_english_the_neural_networker_pipeline_en_5.5.0_3.0_1725895433336.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_english_the_neural_networker_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_english_the_neural_networker_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_english_the_neural_networker_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|836.4 MB| + +## References + +https://huggingface.co/the-neural-networker/xlm-roberta-base-finetuned-panx-en + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_andreaschandra_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_andreaschandra_en.md new file mode 100644 index 00000000000000..38dc9ccbd13106 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_andreaschandra_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_andreaschandra XlmRoBertaForTokenClassification from andreaschandra +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_andreaschandra +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_andreaschandra` is a English model originally trained by andreaschandra. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_andreaschandra_en_5.5.0_3.0_1725917586767.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_andreaschandra_en_5.5.0_3.0_1725917586767.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_andreaschandra","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_andreaschandra", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
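For low-latency inference on a handful of sentences, the fitted `pipelineModel` can be wrapped in a `LightPipeline`, which runs on the driver without a distributed job. A minimal sketch, assuming the `pipelineModel` from the example above and that its stages are wired so the classifier reads the `document` and `token` columns.

```python
# Minimal sketch: LightPipeline inference on raw strings.
# Assumes `pipelineModel` fitted as in the example above.
from sparknlp.base import LightPipeline

light = LightPipeline(pipelineModel)
results = light.fullAnnotate(["Emmanuel Macron est né à Amiens."])

# each annotation exposes begin/end offsets and the predicted tag
for ann in results[0]["ner"]:
    print(ann.begin, ann.end, ann.result)
```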
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_andreaschandra| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|840.9 MB| + +## References + +https://huggingface.co/andreaschandra/xlm-roberta-base-finetuned-panx-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_andreaschandra_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_andreaschandra_pipeline_en.md new file mode 100644 index 00000000000000..4c85eb53caf064 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_andreaschandra_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_andreaschandra_pipeline pipeline XlmRoBertaForTokenClassification from andreaschandra +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_andreaschandra_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_andreaschandra_pipeline` is a English model originally trained by andreaschandra. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_andreaschandra_pipeline_en_5.5.0_3.0_1725917667199.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_andreaschandra_pipeline_en_5.5.0_3.0_1725917667199.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_french_andreaschandra_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_french_andreaschandra_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_andreaschandra_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|840.9 MB| + +## References + +https://huggingface.co/andreaschandra/xlm-roberta-base-finetuned-panx-fr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_arnaudmkonan_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_arnaudmkonan_en.md new file mode 100644 index 00000000000000..742e80acf7d936 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_arnaudmkonan_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_arnaudmkonan XlmRoBertaForTokenClassification from Arnaudmkonan +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_arnaudmkonan +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_arnaudmkonan` is a English model originally trained by Arnaudmkonan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_arnaudmkonan_en_5.5.0_3.0_1725895311913.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_arnaudmkonan_en_5.5.0_3.0_1725895311913.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_arnaudmkonan","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_arnaudmkonan", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_arnaudmkonan| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|840.9 MB| + +## References + +https://huggingface.co/Arnaudmkonan/xlm-roberta-base-finetuned-panx-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_arnaudmkonan_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_arnaudmkonan_pipeline_en.md new file mode 100644 index 00000000000000..fbb66a04afb879 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_arnaudmkonan_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_arnaudmkonan_pipeline pipeline XlmRoBertaForTokenClassification from Arnaudmkonan +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_arnaudmkonan_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_arnaudmkonan_pipeline` is a English model originally trained by Arnaudmkonan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_arnaudmkonan_pipeline_en_5.5.0_3.0_1725895390259.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_arnaudmkonan_pipeline_en_5.5.0_3.0_1725895390259.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_french_arnaudmkonan_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_french_arnaudmkonan_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_arnaudmkonan_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|840.9 MB| + +## References + +https://huggingface.co/Arnaudmkonan/xlm-roberta-base-finetuned-panx-fr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_bessho_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_bessho_en.md new file mode 100644 index 00000000000000..7c838c53baa854 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_bessho_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_bessho XlmRoBertaForTokenClassification from bessho +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_bessho +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_bessho` is a English model originally trained by bessho. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_bessho_en_5.5.0_3.0_1725894765277.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_bessho_en_5.5.0_3.0_1725894765277.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_bessho","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_bessho", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_bessho| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|840.9 MB| + +## References + +https://huggingface.co/bessho/xlm-roberta-base-finetuned-panx-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_bessho_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_bessho_pipeline_en.md new file mode 100644 index 00000000000000..7e95ff46e6cd46 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_bessho_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_bessho_pipeline pipeline XlmRoBertaForTokenClassification from bessho +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_bessho_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_bessho_pipeline` is a English model originally trained by bessho. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_bessho_pipeline_en_5.5.0_3.0_1725894843226.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_bessho_pipeline_en_5.5.0_3.0_1725894843226.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_french_bessho_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_french_bessho_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_bessho_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|840.9 MB| + +## References + +https://huggingface.co/bessho/xlm-roberta-base-finetuned-panx-fr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_hanlforever_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_hanlforever_en.md new file mode 100644 index 00000000000000..4e416bb1052091 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_hanlforever_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_hanlforever XlmRoBertaForTokenClassification from hanlforever +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_hanlforever +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_hanlforever` is a English model originally trained by hanlforever. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_hanlforever_en_5.5.0_3.0_1725895758271.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_hanlforever_en_5.5.0_3.0_1725895758271.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_hanlforever","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_hanlforever", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_hanlforever| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|840.9 MB| + +## References + +https://huggingface.co/hanlforever/xlm-roberta-base-finetuned-panx-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_k3lana_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_k3lana_en.md new file mode 100644 index 00000000000000..7fe91a6bec5159 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_k3lana_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_k3lana XlmRoBertaForTokenClassification from k3lana +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_k3lana +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_k3lana` is a English model originally trained by k3lana. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_k3lana_en_5.5.0_3.0_1725917545518.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_k3lana_en_5.5.0_3.0_1725917545518.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_k3lana","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_k3lana", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
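Each entry in the `ner` column is a Spark NLP annotation that also carries a metadata map. The sketch below assumes the `pipelineDF` from the example above (with the classifier reading the `document` and `token` columns) and that the model exposes a `confidence` key in that map; the key set can vary between models, so treat it as an assumption.

```python
# Minimal sketch: token tags with their (assumed) confidence scores.
from pyspark.sql.functions import explode, col

pipelineDF.select(explode(col("ner")).alias("ann")) \
    .select(
        col("ann.result").alias("tag"),
        col("ann.metadata").getItem("confidence").alias("confidence")  # assumed key
    ) \
    .show(truncate=False)
```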
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_k3lana| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|840.9 MB| + +## References + +https://huggingface.co/k3lana/xlm-roberta-base-finetuned-panx-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_k3lana_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_k3lana_pipeline_en.md new file mode 100644 index 00000000000000..35a15b6e90f803 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_k3lana_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_k3lana_pipeline pipeline XlmRoBertaForTokenClassification from k3lana +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_k3lana_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_k3lana_pipeline` is a English model originally trained by k3lana. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_k3lana_pipeline_en_5.5.0_3.0_1725917625033.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_k3lana_pipeline_en_5.5.0_3.0_1725917625033.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_french_k3lana_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_french_k3lana_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_k3lana_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|840.9 MB| + +## References + +https://huggingface.co/k3lana/xlm-roberta-base-finetuned-panx-fr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_leosol_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_leosol_en.md new file mode 100644 index 00000000000000..c18d982569c5c3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_french_leosol_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_leosol XlmRoBertaForTokenClassification from leosol +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_leosol +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_leosol` is a English model originally trained by leosol. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_leosol_en_5.5.0_3.0_1725918344930.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_leosol_en_5.5.0_3.0_1725918344930.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_leosol","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_leosol", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_leosol| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|827.9 MB| + +## References + +https://huggingface.co/leosol/xlm-roberta-base-finetuned-panx-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_aaa01101312_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_aaa01101312_en.md new file mode 100644 index 00000000000000..4b209424955fb8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_aaa01101312_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_aaa01101312 XlmRoBertaForTokenClassification from AAA01101312 +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_aaa01101312 +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_aaa01101312` is a English model originally trained by AAA01101312. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_aaa01101312_en_5.5.0_3.0_1725917726884.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_aaa01101312_en_5.5.0_3.0_1725917726884.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_aaa01101312","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_aaa01101312", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_aaa01101312| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|840.8 MB| + +## References + +https://huggingface.co/AAA01101312/xlm-roberta-base-finetuned-panx-de \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_ankit15nov_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_ankit15nov_en.md new file mode 100644 index 00000000000000..70b43b69df2692 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_ankit15nov_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_french_ankit15nov XlmRoBertaForTokenClassification from Ankit15nov +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_french_ankit15nov +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_french_ankit15nov` is a English model originally trained by Ankit15nov. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_ankit15nov_en_5.5.0_3.0_1725917653044.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_ankit15nov_en_5.5.0_3.0_1725917653044.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_french_ankit15nov","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("ner") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols("document") + .setOutputCol("token") + +val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_french_ankit15nov", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("ner") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_french_ankit15nov| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|858.2 MB| + +## References + +https://huggingface.co/Ankit15nov/xlm-roberta-base-finetuned-panx-de-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_ankit15nov_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_ankit15nov_pipeline_en.md new file mode 100644 index 00000000000000..152353bef5ec7d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_ankit15nov_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_french_ankit15nov_pipeline pipeline XlmRoBertaForTokenClassification from Ankit15nov +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_french_ankit15nov_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_french_ankit15nov_pipeline` is a English model originally trained by Ankit15nov. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_ankit15nov_pipeline_en_5.5.0_3.0_1725917721661.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_ankit15nov_pipeline_en_5.5.0_3.0_1725917721661.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_french_ankit15nov_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_french_ankit15nov_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_french_ankit15nov_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|858.2 MB| + +## References + +https://huggingface.co/Ankit15nov/xlm-roberta-base-finetuned-panx-de-fr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_esperesa_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_esperesa_en.md new file mode 100644 index 00000000000000..bf275572e5cd64 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_esperesa_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_french_esperesa XlmRoBertaForTokenClassification from esperesa +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_french_esperesa +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_french_esperesa` is a English model originally trained by esperesa. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_esperesa_en_5.5.0_3.0_1725895175297.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_esperesa_en_5.5.0_3.0_1725895175297.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_french_esperesa","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_french_esperesa", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
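
The Python example above relies on a handful of imports that the card omits. A hedged completion is sketched below; the import paths follow the standard Spark NLP and PySpark package layout.

```python
# Imports and session setup assumed by the example above (sketch, not part of the original card).
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, XlmRoBertaForTokenClassification
from pyspark.ml import Pipeline

# Provides the `spark` session used by spark.createDataFrame(...) above.
spark = sparknlp.start()
```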
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_french_esperesa| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|858.2 MB| + +## References + +https://huggingface.co/esperesa/xlm-roberta-base-finetuned-panx-de-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_esperesa_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_esperesa_pipeline_en.md new file mode 100644 index 00000000000000..cbc6be686341e1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_esperesa_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_french_esperesa_pipeline pipeline XlmRoBertaForTokenClassification from esperesa +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_french_esperesa_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_french_esperesa_pipeline` is a English model originally trained by esperesa. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_esperesa_pipeline_en_5.5.0_3.0_1725895242411.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_esperesa_pipeline_en_5.5.0_3.0_1725895242411.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_french_esperesa_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_french_esperesa_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_french_esperesa_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|858.2 MB| + +## References + +https://huggingface.co/esperesa/xlm-roberta-base-finetuned-panx-de-fr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_halteroxhunter_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_halteroxhunter_en.md new file mode 100644 index 00000000000000..565f49634e361d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_halteroxhunter_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_french_halteroxhunter XlmRoBertaForTokenClassification from HalteroXHunter +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_french_halteroxhunter +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_french_halteroxhunter` is a English model originally trained by HalteroXHunter. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_halteroxhunter_en_5.5.0_3.0_1725918932882.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_halteroxhunter_en_5.5.0_3.0_1725918932882.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_french_halteroxhunter","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_french_halteroxhunter", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_french_halteroxhunter| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|858.2 MB| + +## References + +https://huggingface.co/HalteroXHunter/xlm-roberta-base-finetuned-panx-de-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_inniok_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_inniok_en.md new file mode 100644 index 00000000000000..340df0c0e6c6f8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_inniok_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_french_inniok XlmRoBertaForTokenClassification from inniok +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_french_inniok +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_french_inniok` is a English model originally trained by inniok. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_inniok_en_5.5.0_3.0_1725923089031.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_inniok_en_5.5.0_3.0_1725923089031.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_french_inniok","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_french_inniok", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_french_inniok| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|843.4 MB| + +## References + +https://huggingface.co/inniok/xlm-roberta-base-finetuned-panx-de-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_sungkwangjoong_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_sungkwangjoong_en.md new file mode 100644 index 00000000000000..bb2ea469981e18 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_french_sungkwangjoong_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_french_sungkwangjoong XlmRoBertaForTokenClassification from sungkwangjoong +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_french_sungkwangjoong +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_french_sungkwangjoong` is a English model originally trained by sungkwangjoong. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_sungkwangjoong_en_5.5.0_3.0_1725895634663.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_sungkwangjoong_en_5.5.0_3.0_1725895634663.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_french_sungkwangjoong","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_french_sungkwangjoong", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_french_sungkwangjoong| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|843.4 MB| + +## References + +https://huggingface.co/sungkwangjoong/xlm-roberta-base-finetuned-panx-de-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_italian_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_italian_pipeline_en.md new file mode 100644 index 00000000000000..0973e4bc9f5b9a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_italian_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_italian_pipeline pipeline XlmRoBertaForTokenClassification from Ferro +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_italian_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_italian_pipeline` is a English model originally trained by Ferro. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_italian_pipeline_en_5.5.0_3.0_1725923168857.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_italian_pipeline_en_5.5.0_3.0_1725923168857.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_italian_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_italian_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_italian_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|856.7 MB| + +## References + +https://huggingface.co/Ferro/xlm-roberta-base-finetuned-panx-de-it + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_mikechen_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_mikechen_en.md new file mode 100644 index 00000000000000..a853148b43bc5b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_mikechen_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_mikechen XlmRoBertaForTokenClassification from mikechen +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_mikechen +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_mikechen` is a English model originally trained by mikechen. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_mikechen_en_5.5.0_3.0_1725894096271.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_mikechen_en_5.5.0_3.0_1725894096271.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_mikechen","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_mikechen", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_mikechen| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|840.8 MB| + +## References + +https://huggingface.co/mikechen/xlm-roberta-base-finetuned-panx-de \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_mikechen_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_mikechen_pipeline_en.md new file mode 100644 index 00000000000000..528131c06e98cf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_mikechen_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_mikechen_pipeline pipeline XlmRoBertaForTokenClassification from mikechen +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_mikechen_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_mikechen_pipeline` is a English model originally trained by mikechen. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_mikechen_pipeline_en_5.5.0_3.0_1725894183924.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_mikechen_pipeline_en_5.5.0_3.0_1725894183924.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_mikechen_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_mikechen_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_mikechen_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|840.8 MB| + +## References + +https://huggingface.co/mikechen/xlm-roberta-base-finetuned-panx-de + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_qihehehehe_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_qihehehehe_en.md new file mode 100644 index 00000000000000..1726ebdfdd08e0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_qihehehehe_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_qihehehehe XlmRoBertaForTokenClassification from QiHehehehe +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_qihehehehe +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_qihehehehe` is a English model originally trained by QiHehehehe. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_qihehehehe_en_5.5.0_3.0_1725918526923.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_qihehehehe_en_5.5.0_3.0_1725918526923.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_qihehehehe","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_qihehehehe", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_qihehehehe| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|840.8 MB| + +## References + +https://huggingface.co/QiHehehehe/xlm-roberta-base-finetuned-panx-de \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_qihehehehe_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_qihehehehe_pipeline_en.md new file mode 100644 index 00000000000000..a207635f1a2d2f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_qihehehehe_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_qihehehehe_pipeline pipeline XlmRoBertaForTokenClassification from QiHehehehe +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_qihehehehe_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_qihehehehe_pipeline` is a English model originally trained by QiHehehehe. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_qihehehehe_pipeline_en_5.5.0_3.0_1725918615717.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_qihehehehe_pipeline_en_5.5.0_3.0_1725918615717.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_qihehehehe_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_qihehehehe_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_qihehehehe_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|840.8 MB| + +## References + +https://huggingface.co/QiHehehehe/xlm-roberta-base-finetuned-panx-de + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_solvaysphere_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_solvaysphere_en.md new file mode 100644 index 00000000000000..a5d1c006f8c674 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_german_solvaysphere_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_solvaysphere XlmRoBertaForTokenClassification from solvaysphere +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_solvaysphere +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_solvaysphere` is a English model originally trained by solvaysphere. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_solvaysphere_en_5.5.0_3.0_1725922205636.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_solvaysphere_en_5.5.0_3.0_1725922205636.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_solvaysphere","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_solvaysphere", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
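
After `pipelineDF` has been computed as above, the token-level predictions can be inspected with standard Spark SQL; a small sketch follows. The column name mirrors the `setOutputCol("ner")` call in the example, and the explode pattern is generic Spark, not a Spark NLP API.

```python
from pyspark.sql import functions as F

# Each row of the "ner" column holds an array of annotations; explode it to one row per predicted tag.
pipelineDF.select(F.explode("ner").alias("entity")) \
    .select("entity.result", "entity.begin", "entity.end") \
    .show(truncate=False)
```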
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_solvaysphere| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|853.8 MB| + +## References + +https://huggingface.co/solvaysphere/xlm-roberta-base-finetuned-panx-de \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_buruzaemon_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_buruzaemon_pipeline_en.md new file mode 100644 index 00000000000000..c7e147ba71c02a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_buruzaemon_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_italian_buruzaemon_pipeline pipeline XlmRoBertaForTokenClassification from buruzaemon +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_italian_buruzaemon_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_italian_buruzaemon_pipeline` is a English model originally trained by buruzaemon. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_italian_buruzaemon_pipeline_en_5.5.0_3.0_1725894966815.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_italian_buruzaemon_pipeline_en_5.5.0_3.0_1725894966815.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_italian_buruzaemon_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_italian_buruzaemon_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_italian_buruzaemon_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|816.8 MB| + +## References + +https://huggingface.co/buruzaemon/xlm-roberta-base-finetuned-panx-it + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_haesun_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_haesun_pipeline_en.md new file mode 100644 index 00000000000000..ad4b52dc2794ab --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_haesun_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_italian_haesun_pipeline pipeline XlmRoBertaForTokenClassification from haesun +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_italian_haesun_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_italian_haesun_pipeline` is a English model originally trained by haesun. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_italian_haesun_pipeline_en_5.5.0_3.0_1725922462159.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_italian_haesun_pipeline_en_5.5.0_3.0_1725922462159.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_italian_haesun_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_italian_haesun_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_italian_haesun_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|816.8 MB| + +## References + +https://huggingface.co/haesun/xlm-roberta-base-finetuned-panx-it + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_ryo_hsgw_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_ryo_hsgw_en.md new file mode 100644 index 00000000000000..6073629d2ec706 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_ryo_hsgw_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_italian_ryo_hsgw XlmRoBertaForTokenClassification from ryo-hsgw +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_italian_ryo_hsgw +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_italian_ryo_hsgw` is a English model originally trained by ryo-hsgw. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_italian_ryo_hsgw_en_5.5.0_3.0_1725894252882.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_italian_ryo_hsgw_en_5.5.0_3.0_1725894252882.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_italian_ryo_hsgw","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_italian_ryo_hsgw", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_italian_ryo_hsgw| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|828.6 MB| + +## References + +https://huggingface.co/ryo-hsgw/xlm-roberta-base-finetuned-panx-it \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_ryo_hsgw_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_ryo_hsgw_pipeline_en.md new file mode 100644 index 00000000000000..45832f5b46a62c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_ryo_hsgw_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_italian_ryo_hsgw_pipeline pipeline XlmRoBertaForTokenClassification from ryo-hsgw +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_italian_ryo_hsgw_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_italian_ryo_hsgw_pipeline` is a English model originally trained by ryo-hsgw. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_italian_ryo_hsgw_pipeline_en_5.5.0_3.0_1725894340095.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_italian_ryo_hsgw_pipeline_en_5.5.0_3.0_1725894340095.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_italian_ryo_hsgw_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_italian_ryo_hsgw_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_italian_ryo_hsgw_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|828.6 MB| + +## References + +https://huggingface.co/ryo-hsgw/xlm-roberta-base-finetuned-panx-it + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_wendao_123_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_wendao_123_pipeline_en.md new file mode 100644 index 00000000000000..50ff412762e4c9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_wendao_123_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_italian_wendao_123_pipeline pipeline XlmRoBertaForTokenClassification from Wendao-123 +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_italian_wendao_123_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_italian_wendao_123_pipeline` is a English model originally trained by Wendao-123. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_italian_wendao_123_pipeline_en_5.5.0_3.0_1725895628773.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_italian_wendao_123_pipeline_en_5.5.0_3.0_1725895628773.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_italian_wendao_123_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_italian_wendao_123_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_italian_wendao_123_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|816.8 MB| + +## References + +https://huggingface.co/Wendao-123/xlm-roberta-base-finetuned-panx-it + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_youngbreadho_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_youngbreadho_en.md new file mode 100644 index 00000000000000..34e8579bf8a06e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_italian_youngbreadho_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_italian_youngbreadho XlmRoBertaForTokenClassification from youngbreadho +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_italian_youngbreadho +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_italian_youngbreadho` is a English model originally trained by youngbreadho. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_italian_youngbreadho_en_5.5.0_3.0_1725917715497.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_italian_youngbreadho_en_5.5.0_3.0_1725917715497.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_italian_youngbreadho","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_italian_youngbreadho", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
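
For quick, single-sentence checks it can be more convenient to wrap the fitted model in a `LightPipeline`, which runs the same stages on plain Python strings without building a DataFrame. A sketch under the assumption that `pipelineModel` was fitted as in the example above; the sample sentence is illustrative only.

```python
from sparknlp.base import LightPipeline

light = LightPipeline(pipelineModel)

# annotate() returns a dict keyed by the output column names defined in the pipeline ("ner" above).
result = light.annotate("Io amo spark-nlp")
print(result["ner"])
```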
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_italian_youngbreadho| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|826.5 MB| + +## References + +https://huggingface.co/youngbreadho/xlm-roberta-base-finetuned-panx-it \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_turkish_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_turkish_en.md new file mode 100644 index 00000000000000..fe309c87044245 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_turkish_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_turkish XlmRoBertaForTokenClassification from hcy5561 +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_turkish +date: 2024-09-09 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_turkish` is a English model originally trained by hcy5561. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_turkish_en_5.5.0_3.0_1725893950926.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_turkish_en_5.5.0_3.0_1725893950926.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_turkish","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_turkish", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_turkish| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|837.9 MB| + +## References + +https://huggingface.co/hcy5561/xlm-roberta-base-finetuned-panx-tr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_turkish_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_turkish_pipeline_en.md new file mode 100644 index 00000000000000..a5b0ce2931207c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_finetuned_panx_turkish_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_turkish_pipeline pipeline XlmRoBertaForTokenClassification from hcy5561 +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_turkish_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_turkish_pipeline` is a English model originally trained by hcy5561. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_turkish_pipeline_en_5.5.0_3.0_1725894035925.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_turkish_pipeline_en_5.5.0_3.0_1725894035925.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_turkish_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_turkish_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_turkish_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|837.9 MB| + +## References + +https://huggingface.co/hcy5561/xlm-roberta-base-finetuned-panx-tr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_fintuned_panx_italian_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_fintuned_panx_italian_pipeline_en.md new file mode 100644 index 00000000000000..8248fe8c8332af --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_fintuned_panx_italian_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_fintuned_panx_italian_pipeline pipeline XlmRoBertaForTokenClassification from tatsunori +author: John Snow Labs +name: xlm_roberta_base_fintuned_panx_italian_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_fintuned_panx_italian_pipeline` is a English model originally trained by tatsunori. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_fintuned_panx_italian_pipeline_en_5.5.0_3.0_1725918023274.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_fintuned_panx_italian_pipeline_en_5.5.0_3.0_1725918023274.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_fintuned_panx_italian_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_fintuned_panx_italian_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_fintuned_panx_italian_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|816.8 MB| + +## References + +https://huggingface.co/tatsunori/xlm-roberta-base-fintuned-panx-it + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_imdb_seanghay_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_imdb_seanghay_pipeline_en.md new file mode 100644 index 00000000000000..4b6989bef31b6f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_imdb_seanghay_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_imdb_seanghay_pipeline pipeline XlmRoBertaForSequenceClassification from seanghay +author: John Snow Labs +name: xlm_roberta_base_imdb_seanghay_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_imdb_seanghay_pipeline` is a English model originally trained by seanghay. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_imdb_seanghay_pipeline_en_5.5.0_3.0_1725907474298.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_imdb_seanghay_pipeline_en_5.5.0_3.0_1725907474298.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_imdb_seanghay_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_imdb_seanghay_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_imdb_seanghay_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|869.0 MB| + +## References + +https://huggingface.co/seanghay/xlm-roberta-base-imdb + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_language_detection_pipeline_xx.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_language_detection_pipeline_xx.md new file mode 100644 index 00000000000000..5108b0ffec2f7d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_language_detection_pipeline_xx.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Multilingual xlm_roberta_base_language_detection_pipeline pipeline XlmRoBertaForSequenceClassification from papluca +author: John Snow Labs +name: xlm_roberta_base_language_detection_pipeline +date: 2024-09-09 +tags: [xx, open_source, pipeline, onnx] +task: Text Classification +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_language_detection_pipeline` is a Multilingual model originally trained by papluca. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_language_detection_pipeline_xx_5.5.0_3.0_1725870572237.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_language_detection_pipeline_xx_5.5.0_3.0_1725870572237.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_language_detection_pipeline", lang = "xx") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_language_detection_pipeline", lang = "xx") +val annotations = pipeline.transform(df) + +``` +
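For a quick check on a single string, the `annotate` method of `PretrainedPipeline` can be used instead of building a DataFrame. The sketch below assumes the classifier's output column is named `class` and that the labels are language codes; both are assumptions about this particular pipeline's configuration:

```python
# Hedged sketch: annotate() returns a dict mapping output columns to lists of results.
prediction = pipeline.annotate("Ceci est un texte écrit en français.")
print(prediction.get("class"))  # expected to hold the predicted language label (assumption)
```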
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_language_detection_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|xx| +|Size:|870.4 MB| + +## References + +https://huggingface.co/papluca/xlm-roberta-base-language-detection + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_lr0_0001_seed42_esp_hau_eng_train_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_lr0_0001_seed42_esp_hau_eng_train_pipeline_en.md new file mode 100644 index 00000000000000..825f80035a372c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_lr0_0001_seed42_esp_hau_eng_train_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_lr0_0001_seed42_esp_hau_eng_train_pipeline pipeline XlmRoBertaForSequenceClassification from shanhy +author: John Snow Labs +name: xlm_roberta_base_lr0_0001_seed42_esp_hau_eng_train_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_lr0_0001_seed42_esp_hau_eng_train_pipeline` is a English model originally trained by shanhy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_lr0_0001_seed42_esp_hau_eng_train_pipeline_en_5.5.0_3.0_1725906646695.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_lr0_0001_seed42_esp_hau_eng_train_pipeline_en_5.5.0_3.0_1725906646695.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_lr0_0001_seed42_esp_hau_eng_train_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_lr0_0001_seed42_esp_hau_eng_train_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_lr0_0001_seed42_esp_hau_eng_train_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|824.3 MB| + +## References + +https://huggingface.co/shanhy/xlm-roberta-base_lr0.0001_seed42_esp-hau-eng_train + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_seed42_original_amh_esp_eng_train_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_seed42_original_amh_esp_eng_train_pipeline_en.md new file mode 100644 index 00000000000000..66f930c2b01a58 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_seed42_original_amh_esp_eng_train_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_seed42_original_amh_esp_eng_train_pipeline pipeline XlmRoBertaForSequenceClassification from shanhy +author: John Snow Labs +name: xlm_roberta_base_seed42_original_amh_esp_eng_train_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_seed42_original_amh_esp_eng_train_pipeline` is a English model originally trained by shanhy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_seed42_original_amh_esp_eng_train_pipeline_en_5.5.0_3.0_1725907313754.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_seed42_original_amh_esp_eng_train_pipeline_en_5.5.0_3.0_1725907313754.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_seed42_original_amh_esp_eng_train_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_seed42_original_amh_esp_eng_train_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_seed42_original_amh_esp_eng_train_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|803.8 MB| + +## References + +https://huggingface.co/shanhy/xlm-roberta-base_seed42_original_amh-esp-eng_train + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_tweet_sentiment_arabic_trimmed_arabic_30000_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_tweet_sentiment_arabic_trimmed_arabic_30000_en.md new file mode 100644 index 00000000000000..e65ea1d566243d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_base_tweet_sentiment_arabic_trimmed_arabic_30000_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_tweet_sentiment_arabic_trimmed_arabic_30000 XlmRoBertaForSequenceClassification from vocabtrimmer +author: John Snow Labs +name: xlm_roberta_base_tweet_sentiment_arabic_trimmed_arabic_30000 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_tweet_sentiment_arabic_trimmed_arabic_30000` is a English model originally trained by vocabtrimmer. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_tweet_sentiment_arabic_trimmed_arabic_30000_en_5.5.0_3.0_1725907568357.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_tweet_sentiment_arabic_trimmed_arabic_30000_en_5.5.0_3.0_1725907568357.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_roberta_base_tweet_sentiment_arabic_trimmed_arabic_30000","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_roberta_base_tweet_sentiment_arabic_trimmed_arabic_30000", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
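To inspect the predictions produced by the code above, the `class` output column of `pipelineDF` can be selected directly; a brief sketch:

```python
# Each row of "class" is a list of annotations; "result" carries the predicted label.
pipelineDF.select("text", "class.result").show(truncate=False)
```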
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_tweet_sentiment_arabic_trimmed_arabic_30000| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|387.4 MB| + +## References + +https://huggingface.co/vocabtrimmer/xlm-roberta-base-tweet-sentiment-ar-trimmed-ar-30000 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_job_skill_reranker_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_job_skill_reranker_en.md new file mode 100644 index 00000000000000..f5213a970b07e6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_job_skill_reranker_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_job_skill_reranker XlmRoBertaForSequenceClassification from serbog +author: John Snow Labs +name: xlm_roberta_job_skill_reranker +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_job_skill_reranker` is a English model originally trained by serbog. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_job_skill_reranker_en_5.5.0_3.0_1725907326656.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_job_skill_reranker_en_5.5.0_3.0_1725907326656.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_roberta_job_skill_reranker","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlm_roberta_job_skill_reranker", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_job_skill_reranker| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|850.5 MB| + +## References + +https://huggingface.co/serbog/xlm-roberta-job-skill-reranker \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_job_skill_reranker_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_job_skill_reranker_pipeline_en.md new file mode 100644 index 00000000000000..47d34b69447018 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_job_skill_reranker_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_job_skill_reranker_pipeline pipeline XlmRoBertaForSequenceClassification from serbog +author: John Snow Labs +name: xlm_roberta_job_skill_reranker_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_job_skill_reranker_pipeline` is a English model originally trained by serbog. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_job_skill_reranker_pipeline_en_5.5.0_3.0_1725907419488.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_job_skill_reranker_pipeline_en_5.5.0_3.0_1725907419488.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_job_skill_reranker_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_job_skill_reranker_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_job_skill_reranker_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|850.5 MB| + +## References + +https://huggingface.co/serbog/xlm-roberta-job-skill-reranker + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_ner_japanese_tsmatz_ja.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_ner_japanese_tsmatz_ja.md new file mode 100644 index 00000000000000..015459a18edef4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_ner_japanese_tsmatz_ja.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Japanese xlm_roberta_ner_japanese_tsmatz XlmRoBertaForTokenClassification from tsmatz +author: John Snow Labs +name: xlm_roberta_ner_japanese_tsmatz +date: 2024-09-09 +tags: [ja, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: ja +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_ner_japanese_tsmatz` is a Japanese model originally trained by tsmatz. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_ner_japanese_tsmatz_ja_5.5.0_3.0_1725918228768.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_ner_japanese_tsmatz_ja_5.5.0_3.0_1725918228768.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_ner_japanese_tsmatz","ja") \
    .setInputCols(["document","token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_ner_japanese_tsmatz", "ja")
    .setInputCols(Array("document","token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
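A short follow-up sketch for pairing each token with its predicted tag in `pipelineDF`, using Spark's `arrays_zip`; the column names match the output columns defined in the example above:

```python
from pyspark.sql.functions import arrays_zip, col, explode

# Zip token texts with their NER labels and flatten to one row per token.
pipelineDF.select(
    explode(arrays_zip(col("token.result"), col("ner.result"))).alias("tagged")
).show(truncate=False)
```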
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_ner_japanese_tsmatz| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|ja| +|Size:|783.2 MB| + +## References + +https://huggingface.co/tsmatz/xlm-roberta-ner-japanese \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_ner_japanese_tsmatz_pipeline_ja.md b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_ner_japanese_tsmatz_pipeline_ja.md new file mode 100644 index 00000000000000..882a4b28b96218 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlm_roberta_ner_japanese_tsmatz_pipeline_ja.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Japanese xlm_roberta_ner_japanese_tsmatz_pipeline pipeline XlmRoBertaForTokenClassification from tsmatz +author: John Snow Labs +name: xlm_roberta_ner_japanese_tsmatz_pipeline +date: 2024-09-09 +tags: [ja, open_source, pipeline, onnx] +task: Named Entity Recognition +language: ja +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_ner_japanese_tsmatz_pipeline` is a Japanese model originally trained by tsmatz. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_ner_japanese_tsmatz_pipeline_ja_5.5.0_3.0_1725918370508.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_ner_japanese_tsmatz_pipeline_ja_5.5.0_3.0_1725918370508.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_ner_japanese_tsmatz_pipeline", lang = "ja") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_ner_japanese_tsmatz_pipeline", lang = "ja") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_ner_japanese_tsmatz_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|ja| +|Size:|783.2 MB| + +## References + +https://huggingface.co/tsmatz/xlm-roberta-ner-japanese + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlmr_sinhalese_english_train_shuffled_1986_test2000_en.md b/docs/_posts/ahmedlone127/2024-09-09-xlmr_sinhalese_english_train_shuffled_1986_test2000_en.md new file mode 100644 index 00000000000000..9a9dd214679adc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlmr_sinhalese_english_train_shuffled_1986_test2000_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlmr_sinhalese_english_train_shuffled_1986_test2000 XlmRoBertaForSequenceClassification from patpizio +author: John Snow Labs +name: xlmr_sinhalese_english_train_shuffled_1986_test2000 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, xlm_roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlmr_sinhalese_english_train_shuffled_1986_test2000` is a English model originally trained by patpizio. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlmr_sinhalese_english_train_shuffled_1986_test2000_en_5.5.0_3.0_1725906629419.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlmr_sinhalese_english_train_shuffled_1986_test2000_en_5.5.0_3.0_1725906629419.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlmr_sinhalese_english_train_shuffled_1986_test2000","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("xlmr_sinhalese_english_train_shuffled_1986_test2000", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlmr_sinhalese_english_train_shuffled_1986_test2000| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|814.2 MB| + +## References + +https://huggingface.co/patpizio/xlmr-si-en-train_shuffled-1986-test2000 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_base_finetuned_hausa_finetuned_ner_swahili_sw.md b/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_base_finetuned_hausa_finetuned_ner_swahili_sw.md new file mode 100644 index 00000000000000..1b8fb9c4bb68dc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_base_finetuned_hausa_finetuned_ner_swahili_sw.md @@ -0,0 +1,115 @@ +--- +layout: model +title: Swahili XLMRobertaForTokenClassification Base Cased model (from mbeukman) +author: John Snow Labs +name: xlmroberta_ner_base_finetuned_hausa_finetuned_ner_swahili +date: 2024-09-09 +tags: [sw, open_source, xlm_roberta, ner, onnx] +task: Named Entity Recognition +language: sw +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XLMRobertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili` is a Swahili model originally trained by `mbeukman`. + +## Predicted Entities + +`PER`, `LOC`, `ORG`, `DATE` + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_base_finetuned_hausa_finetuned_ner_swahili_sw_5.5.0_3.0_1725894102891.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_base_finetuned_hausa_finetuned_ner_swahili_sw_5.5.0_3.0_1725894102891.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols("document") \
    .setOutputCol("token")

token_classifier = XlmRoBertaForTokenClassification.pretrained("xlmroberta_ner_base_finetuned_hausa_finetuned_ner_swahili","sw") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("ner")

ner_converter = NerConverter()\
    .setInputCols(["document", "token", "ner"])\
    .setOutputCol("ner_chunk")

pipeline = Pipeline(stages=[documentAssembler, tokenizer, token_classifier, ner_converter])

data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")

result = pipeline.fit(data).transform(data)
```
```scala
val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val token_classifier = XlmRoBertaForTokenClassification.pretrained("xlmroberta_ner_base_finetuned_hausa_finetuned_ner_swahili","sw")
    .setInputCols(Array("document", "token"))
    .setOutputCol("ner")

val ner_converter = new NerConverter()
    .setInputCols(Array("document", "token", "ner"))
    .setOutputCol("ner_chunk")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, token_classifier, ner_converter))

val data = Seq("PUT YOUR STRING HERE").toDS.toDF("text")

val result = pipeline.fit(data).transform(data)
```

{:.nlu-block}
```python
import nlu
nlu.load("sw.ner.xlmr_roberta.base_finetuned_hausa.by_mbeukman").predict("""PUT YOUR STRING HERE""")
```
</div>
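To read the grouped entities out of `result`, the `ner_chunk` column produced by the `NerConverter` stage holds each chunk's text, with the entity label kept in its metadata; a minimal sketch:

```python
# "result" holds the chunk texts; the "entity" metadata key holds the label (PER, LOC, ORG, DATE).
result.select("ner_chunk.result", "ner_chunk.metadata").show(truncate=False)
```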
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlmroberta_ner_base_finetuned_hausa_finetuned_ner_swahili| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|sw| +|Size:|1.0 GB| + +## References + +References + +- https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-hausa-finetuned-ner-swahili +- https://arxiv.org/abs/2103.11811 +- https://github.com/Michael-Beukman/NERTransfer +- https://github.com/masakhane-io/masakhane-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_flood_base_finetuned_panx_all_pipeline_xx.md b/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_flood_base_finetuned_panx_all_pipeline_xx.md new file mode 100644 index 00000000000000..27302666cb7d10 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_flood_base_finetuned_panx_all_pipeline_xx.md @@ -0,0 +1,70 @@ +--- +layout: model +title: Multilingual xlmroberta_ner_flood_base_finetuned_panx_all_pipeline pipeline XlmRoBertaForTokenClassification from flood +author: John Snow Labs +name: xlmroberta_ner_flood_base_finetuned_panx_all_pipeline +date: 2024-09-09 +tags: [xx, open_source, pipeline, onnx] +task: Named Entity Recognition +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlmroberta_ner_flood_base_finetuned_panx_all_pipeline` is a Multilingual model originally trained by flood. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_flood_base_finetuned_panx_all_pipeline_xx_5.5.0_3.0_1725895762322.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_flood_base_finetuned_panx_all_pipeline_xx_5.5.0_3.0_1725895762322.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlmroberta_ner_flood_base_finetuned_panx_all_pipeline", lang = "xx") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlmroberta_ner_flood_base_finetuned_panx_all_pipeline", lang = "xx") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlmroberta_ner_flood_base_finetuned_panx_all_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|xx| +|Size:|861.0 MB| + +## References + +https://huggingface.co/flood/xlm-roberta-base-finetuned-panx-all + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_flood_base_finetuned_panx_all_xx.md b/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_flood_base_finetuned_panx_all_xx.md new file mode 100644 index 00000000000000..8a46020f6cadf3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_flood_base_finetuned_panx_all_xx.md @@ -0,0 +1,112 @@ +--- +layout: model +title: Multilingual XLMRobertaForTokenClassification Base Cased model (from flood) +author: John Snow Labs +name: xlmroberta_ner_flood_base_finetuned_panx_all +date: 2024-09-09 +tags: [xx, open_source, xlm_roberta, ner, onnx] +task: Named Entity Recognition +language: xx +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XLMRobertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `xlm-roberta-base-finetuned-panx-all` is a Multilingual model originally trained by `flood`. + +## Predicted Entities + +`ORG`, `LOC`, `PER` + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_flood_base_finetuned_panx_all_xx_5.5.0_3.0_1725895699205.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_flood_base_finetuned_panx_all_xx_5.5.0_3.0_1725895699205.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols("document") \
    .setOutputCol("token")

token_classifier = XlmRoBertaForTokenClassification.pretrained("xlmroberta_ner_flood_base_finetuned_panx_all","xx") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("ner")

ner_converter = NerConverter()\
    .setInputCols(["document", "token", "ner"])\
    .setOutputCol("ner_chunk")

pipeline = Pipeline(stages=[documentAssembler, tokenizer, token_classifier, ner_converter])

data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")

result = pipeline.fit(data).transform(data)
```
```scala
val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val token_classifier = XlmRoBertaForTokenClassification.pretrained("xlmroberta_ner_flood_base_finetuned_panx_all","xx")
    .setInputCols(Array("document", "token"))
    .setOutputCol("ner")

val ner_converter = new NerConverter()
    .setInputCols(Array("document", "token", "ner"))
    .setOutputCol("ner_chunk")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, token_classifier, ner_converter))

val data = Seq("PUT YOUR STRING HERE").toDS.toDF("text")

val result = pipeline.fit(data).transform(data)
```

{:.nlu-block}
```python
import nlu
nlu.load("xx.ner.xlmr_roberta.base_finetuned_panx_all.by_flood").predict("""PUT YOUR STRING HERE""")
```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlmroberta_ner_flood_base_finetuned_panx_all| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|xx| +|Size:|861.0 MB| + +## References + +References + +- https://huggingface.co/flood/xlm-roberta-base-finetuned-panx-all \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_manqingliu_base_finetuned_panx_de.md b/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_manqingliu_base_finetuned_panx_de.md new file mode 100644 index 00000000000000..0b8f6d138e6ec9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_manqingliu_base_finetuned_panx_de.md @@ -0,0 +1,113 @@ +--- +layout: model +title: German XLMRobertaForTokenClassification Base Cased model (from ManqingLiu) +author: John Snow Labs +name: xlmroberta_ner_manqingliu_base_finetuned_panx +date: 2024-09-09 +tags: [de, open_source, xlm_roberta, ner, onnx] +task: Named Entity Recognition +language: de +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XLMRobertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `xlm-roberta-base-finetuned-panx-de` is a German model originally trained by `ManqingLiu`. + +## Predicted Entities + +`PER`, `LOC`, `ORG` + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_manqingliu_base_finetuned_panx_de_5.5.0_3.0_1725919453210.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_manqingliu_base_finetuned_panx_de_5.5.0_3.0_1725919453210.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols("document") \
    .setOutputCol("token")

token_classifier = XlmRoBertaForTokenClassification.pretrained("xlmroberta_ner_manqingliu_base_finetuned_panx","de") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("ner")

ner_converter = NerConverter()\
    .setInputCols(["document", "token", "ner"])\
    .setOutputCol("ner_chunk")

pipeline = Pipeline(stages=[documentAssembler, tokenizer, token_classifier, ner_converter])

data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")

result = pipeline.fit(data).transform(data)
```
```scala
val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val token_classifier = XlmRoBertaForTokenClassification.pretrained("xlmroberta_ner_manqingliu_base_finetuned_panx","de")
    .setInputCols(Array("document", "token"))
    .setOutputCol("ner")

val ner_converter = new NerConverter()
    .setInputCols(Array("document", "token", "ner"))
    .setOutputCol("ner_chunk")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, token_classifier, ner_converter))

val data = Seq("PUT YOUR STRING HERE").toDS.toDF("text")

val result = pipeline.fit(data).transform(data)
```

{:.nlu-block}
```python
import nlu
nlu.load("de.ner.xlmr_roberta.xtreme.base_finetuned.by_ManqingLiu").predict("""PUT YOUR STRING HERE""")
```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlmroberta_ner_manqingliu_base_finetuned_panx| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|de| +|Size:|853.8 MB| + +## References + +References + +- https://huggingface.co/ManqingLiu/xlm-roberta-base-finetuned-panx-de +- https://paperswithcode.com/sota?task=Token+Classification&dataset=xtreme \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_skr3178_base_finetuned_panx_fr.md b/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_skr3178_base_finetuned_panx_fr.md new file mode 100644 index 00000000000000..b059a56458dd6c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_skr3178_base_finetuned_panx_fr.md @@ -0,0 +1,113 @@ +--- +layout: model +title: French XLMRobertaForTokenClassification Base Cased model (from skr3178) +author: John Snow Labs +name: xlmroberta_ner_skr3178_base_finetuned_panx +date: 2024-09-09 +tags: [fr, open_source, xlm_roberta, ner, onnx] +task: Named Entity Recognition +language: fr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XLMRobertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `xlm-roberta-base-finetuned-panx-fr` is a French model originally trained by `skr3178`. + +## Predicted Entities + +`PER`, `LOC`, `ORG` + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_skr3178_base_finetuned_panx_fr_5.5.0_3.0_1725918207564.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_skr3178_base_finetuned_panx_fr_5.5.0_3.0_1725918207564.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols("document") \
    .setOutputCol("token")

token_classifier = XlmRoBertaForTokenClassification.pretrained("xlmroberta_ner_skr3178_base_finetuned_panx","fr") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("ner")

ner_converter = NerConverter()\
    .setInputCols(["document", "token", "ner"])\
    .setOutputCol("ner_chunk")

pipeline = Pipeline(stages=[documentAssembler, tokenizer, token_classifier, ner_converter])

data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")

result = pipeline.fit(data).transform(data)
```
```scala
val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val token_classifier = XlmRoBertaForTokenClassification.pretrained("xlmroberta_ner_skr3178_base_finetuned_panx","fr")
    .setInputCols(Array("document", "token"))
    .setOutputCol("ner")

val ner_converter = new NerConverter()
    .setInputCols(Array("document", "token", "ner"))
    .setOutputCol("ner_chunk")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, token_classifier, ner_converter))

val data = Seq("PUT YOUR STRING HERE").toDS.toDF("text")

val result = pipeline.fit(data).transform(data)
```

{:.nlu-block}
```python
import nlu
nlu.load("fr.ner.xlmr_roberta.xtreme.base_finetuned.by_skr3178").predict("""PUT YOUR STRING HERE""")
```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlmroberta_ner_skr3178_base_finetuned_panx| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|fr| +|Size:|840.9 MB| + +## References + +References + +- https://huggingface.co/skr3178/xlm-roberta-base-finetuned-panx-fr +- https://paperswithcode.com/sota?task=Token+Classification&dataset=xtreme \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_skr3178_base_finetuned_panx_pipeline_fr.md b/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_skr3178_base_finetuned_panx_pipeline_fr.md new file mode 100644 index 00000000000000..fb3246869247ae --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_skr3178_base_finetuned_panx_pipeline_fr.md @@ -0,0 +1,70 @@ +--- +layout: model +title: French xlmroberta_ner_skr3178_base_finetuned_panx_pipeline pipeline XlmRoBertaForTokenClassification from skr3178 +author: John Snow Labs +name: xlmroberta_ner_skr3178_base_finetuned_panx_pipeline +date: 2024-09-09 +tags: [fr, open_source, pipeline, onnx] +task: Named Entity Recognition +language: fr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlmroberta_ner_skr3178_base_finetuned_panx_pipeline` is a French model originally trained by skr3178. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_skr3178_base_finetuned_panx_pipeline_fr_5.5.0_3.0_1725918287028.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_skr3178_base_finetuned_panx_pipeline_fr_5.5.0_3.0_1725918287028.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlmroberta_ner_skr3178_base_finetuned_panx_pipeline", lang = "fr") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlmroberta_ner_skr3178_base_finetuned_panx_pipeline", lang = "fr") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlmroberta_ner_skr3178_base_finetuned_panx_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|fr| +|Size:|840.9 MB| + +## References + +https://huggingface.co/skr3178/xlm-roberta-base-finetuned-panx-fr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_yomexa_base_finetuned_panx_de.md b/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_yomexa_base_finetuned_panx_de.md new file mode 100644 index 00000000000000..3a0442c543ec0b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_yomexa_base_finetuned_panx_de.md @@ -0,0 +1,113 @@ +--- +layout: model +title: German XLMRobertaForTokenClassification Base Cased model (from yomexa) +author: John Snow Labs +name: xlmroberta_ner_yomexa_base_finetuned_panx +date: 2024-09-09 +tags: [de, open_source, xlm_roberta, ner, onnx] +task: Named Entity Recognition +language: de +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XLMRobertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `xlm-roberta-base-finetuned-panx-de` is a German model originally trained by `yomexa`. + +## Predicted Entities + +`PER`, `LOC`, `ORG` + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_yomexa_base_finetuned_panx_de_5.5.0_3.0_1725894478129.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_yomexa_base_finetuned_panx_de_5.5.0_3.0_1725894478129.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols("document") \
    .setOutputCol("token")

token_classifier = XlmRoBertaForTokenClassification.pretrained("xlmroberta_ner_yomexa_base_finetuned_panx","de") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("ner")

ner_converter = NerConverter()\
    .setInputCols(["document", "token", "ner"])\
    .setOutputCol("ner_chunk")

pipeline = Pipeline(stages=[documentAssembler, tokenizer, token_classifier, ner_converter])

data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")

result = pipeline.fit(data).transform(data)
```
```scala
val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val token_classifier = XlmRoBertaForTokenClassification.pretrained("xlmroberta_ner_yomexa_base_finetuned_panx","de")
    .setInputCols(Array("document", "token"))
    .setOutputCol("ner")

val ner_converter = new NerConverter()
    .setInputCols(Array("document", "token", "ner"))
    .setOutputCol("ner_chunk")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, token_classifier, ner_converter))

val data = Seq("PUT YOUR STRING HERE").toDS.toDF("text")

val result = pipeline.fit(data).transform(data)
```

{:.nlu-block}
```python
import nlu
nlu.load("de.ner.xlmr_roberta.xtreme.base_finetuned.by_yomexa").predict("""PUT YOUR STRING HERE""")
```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlmroberta_ner_yomexa_base_finetuned_panx| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|de| +|Size:|853.8 MB| + +## References + +References + +- https://huggingface.co/yomexa/xlm-roberta-base-finetuned-panx-de +- https://paperswithcode.com/sota?task=Token+Classification&dataset=xtreme \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_yomexa_base_finetuned_panx_pipeline_de.md b/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_yomexa_base_finetuned_panx_pipeline_de.md new file mode 100644 index 00000000000000..c7540d2566ee5f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-xlmroberta_ner_yomexa_base_finetuned_panx_pipeline_de.md @@ -0,0 +1,70 @@ +--- +layout: model +title: German xlmroberta_ner_yomexa_base_finetuned_panx_pipeline pipeline XlmRoBertaForTokenClassification from yomexa +author: John Snow Labs +name: xlmroberta_ner_yomexa_base_finetuned_panx_pipeline +date: 2024-09-09 +tags: [de, open_source, pipeline, onnx] +task: Named Entity Recognition +language: de +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlmroberta_ner_yomexa_base_finetuned_panx_pipeline` is a German model originally trained by yomexa. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_yomexa_base_finetuned_panx_pipeline_de_5.5.0_3.0_1725894544396.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlmroberta_ner_yomexa_base_finetuned_panx_pipeline_de_5.5.0_3.0_1725894544396.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlmroberta_ner_yomexa_base_finetuned_panx_pipeline", lang = "de") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlmroberta_ner_yomexa_base_finetuned_panx_pipeline", lang = "de") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlmroberta_ner_yomexa_base_finetuned_panx_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|de| +|Size:|853.8 MB| + +## References + +https://huggingface.co/yomexa/xlm-roberta-base-finetuned-panx-de + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-yelp_polarity_roberta_base_seed_1_en.md b/docs/_posts/ahmedlone127/2024-09-09-yelp_polarity_roberta_base_seed_1_en.md new file mode 100644 index 00000000000000..c566bbc4c49c95 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-yelp_polarity_roberta_base_seed_1_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English yelp_polarity_roberta_base_seed_1 RoBertaForSequenceClassification from utahnlp +author: John Snow Labs +name: yelp_polarity_roberta_base_seed_1 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`yelp_polarity_roberta_base_seed_1` is a English model originally trained by utahnlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/yelp_polarity_roberta_base_seed_1_en_5.5.0_3.0_1725903985418.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/yelp_polarity_roberta_base_seed_1_en_5.5.0_3.0_1725903985418.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = RoBertaForSequenceClassification.pretrained("yelp_polarity_roberta_base_seed_1","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = RoBertaForSequenceClassification.pretrained("yelp_polarity_roberta_base_seed_1", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
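The Python example above assumes the usual Spark NLP entry points are already imported. A small sketch of the imports it relies on (standard package locations, stated here as an assumption rather than part of the original card):

```python
# Imports assumed by the fit/transform example above.
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, RoBertaForSequenceClassification
from pyspark.ml import Pipeline
```

Once `pipelineDF` has been computed, the predicted labels can be read from the `class` output column, for example with `pipelineDF.select("class.result")`.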
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|yelp_polarity_roberta_base_seed_1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|465.8 MB| + +## References + +https://huggingface.co/utahnlp/yelp_polarity_roberta-base_seed-1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-yelp_polarity_roberta_base_seed_1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-yelp_polarity_roberta_base_seed_1_pipeline_en.md new file mode 100644 index 00000000000000..f84c3b12939767 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-yelp_polarity_roberta_base_seed_1_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English yelp_polarity_roberta_base_seed_1_pipeline pipeline RoBertaForSequenceClassification from utahnlp +author: John Snow Labs +name: yelp_polarity_roberta_base_seed_1_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`yelp_polarity_roberta_base_seed_1_pipeline` is a English model originally trained by utahnlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/yelp_polarity_roberta_base_seed_1_pipeline_en_5.5.0_3.0_1725904009837.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/yelp_polarity_roberta_base_seed_1_pipeline_en_5.5.0_3.0_1725904009837.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("yelp_polarity_roberta_base_seed_1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("yelp_polarity_roberta_base_seed_1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|yelp_polarity_roberta_base_seed_1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|465.9 MB| + +## References + +https://huggingface.co/utahnlp/yelp_polarity_roberta-base_seed-1 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-yelp_polarity_roberta_large_seed_3_en.md b/docs/_posts/ahmedlone127/2024-09-09-yelp_polarity_roberta_large_seed_3_en.md new file mode 100644 index 00000000000000..76ad98d86d39b6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-yelp_polarity_roberta_large_seed_3_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English yelp_polarity_roberta_large_seed_3 RoBertaForSequenceClassification from utahnlp +author: John Snow Labs +name: yelp_polarity_roberta_large_seed_3 +date: 2024-09-09 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`yelp_polarity_roberta_large_seed_3` is a English model originally trained by utahnlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/yelp_polarity_roberta_large_seed_3_en_5.5.0_3.0_1725904288892.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/yelp_polarity_roberta_large_seed_3_en_5.5.0_3.0_1725904288892.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = RoBertaForSequenceClassification.pretrained("yelp_polarity_roberta_large_seed_3","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = RoBertaForSequenceClassification.pretrained("yelp_polarity_roberta_large_seed_3", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|yelp_polarity_roberta_large_seed_3| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/utahnlp/yelp_polarity_roberta-large_seed-3 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-09-yelp_polarity_roberta_large_seed_3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-09-yelp_polarity_roberta_large_seed_3_pipeline_en.md new file mode 100644 index 00000000000000..c8b23f7b7d32f0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-09-yelp_polarity_roberta_large_seed_3_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English yelp_polarity_roberta_large_seed_3_pipeline pipeline RoBertaForSequenceClassification from utahnlp +author: John Snow Labs +name: yelp_polarity_roberta_large_seed_3_pipeline +date: 2024-09-09 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`yelp_polarity_roberta_large_seed_3_pipeline` is a English model originally trained by utahnlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/yelp_polarity_roberta_large_seed_3_pipeline_en_5.5.0_3.0_1725904354538.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/yelp_polarity_roberta_large_seed_3_pipeline_en_5.5.0_3.0_1725904354538.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("yelp_polarity_roberta_large_seed_3_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("yelp_polarity_roberta_large_seed_3_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|yelp_polarity_roberta_large_seed_3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/utahnlp/yelp_polarity_roberta-large_seed-3 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-2020_q1_90p_filtered_random_en.md b/docs/_posts/ahmedlone127/2024-09-10-2020_q1_90p_filtered_random_en.md new file mode 100644 index 00000000000000..6607eea8f6cd3a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-2020_q1_90p_filtered_random_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English 2020_q1_90p_filtered_random RoBertaEmbeddings from DouglasPontes +author: John Snow Labs +name: 2020_q1_90p_filtered_random +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`2020_q1_90p_filtered_random` is a English model originally trained by DouglasPontes. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/2020_q1_90p_filtered_random_en_5.5.0_3.0_1725930530162.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/2020_q1_90p_filtered_random_en_5.5.0_3.0_1725930530162.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("2020_q1_90p_filtered_random","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("2020_q1_90p_filtered_random","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
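The example above leaves the embeddings as Spark NLP annotations in the `embeddings` column. If plain vectors are needed downstream, one option is Spark NLP's `EmbeddingsFinisher`; the sketch below only shows the extra stage and assumes the column names from the example.

```python
# Optional extra stage: convert the "embeddings" annotations produced above
# into Spark ML vectors. Append it to the existing pipeline stages.
from sparknlp.base import EmbeddingsFinisher

embeddingsFinisher = EmbeddingsFinisher() \
    .setInputCols(["embeddings"]) \
    .setOutputCols(["finished_embeddings"]) \
    .setOutputAsVector(True)

# e.g. Pipeline().setStages([documentAssembler, tokenizer, embeddings, embeddingsFinisher])
```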
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|2020_q1_90p_filtered_random| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|466.1 MB| + +## References + +https://huggingface.co/DouglasPontes/2020-Q1-90p-filtered-random \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-2020_q2_90p_filtered_random_en.md b/docs/_posts/ahmedlone127/2024-09-10-2020_q2_90p_filtered_random_en.md new file mode 100644 index 00000000000000..f40ebd796ba1af --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-2020_q2_90p_filtered_random_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English 2020_q2_90p_filtered_random RoBertaEmbeddings from DouglasPontes +author: John Snow Labs +name: 2020_q2_90p_filtered_random +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`2020_q2_90p_filtered_random` is a English model originally trained by DouglasPontes. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/2020_q2_90p_filtered_random_en_5.5.0_3.0_1725931497924.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/2020_q2_90p_filtered_random_en_5.5.0_3.0_1725931497924.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("2020_q2_90p_filtered_random","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("2020_q2_90p_filtered_random","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|2020_q2_90p_filtered_random| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|466.1 MB| + +## References + +https://huggingface.co/DouglasPontes/2020-Q2-90p-filtered-random \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-2020_q3_50p_filtered_random_en.md b/docs/_posts/ahmedlone127/2024-09-10-2020_q3_50p_filtered_random_en.md new file mode 100644 index 00000000000000..e411cd433a4fe1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-2020_q3_50p_filtered_random_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English 2020_q3_50p_filtered_random RoBertaEmbeddings from DouglasPontes +author: John Snow Labs +name: 2020_q3_50p_filtered_random +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`2020_q3_50p_filtered_random` is a English model originally trained by DouglasPontes. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/2020_q3_50p_filtered_random_en_5.5.0_3.0_1725930377876.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/2020_q3_50p_filtered_random_en_5.5.0_3.0_1725930377876.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("2020_q3_50p_filtered_random","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("2020_q3_50p_filtered_random","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|2020_q3_50p_filtered_random| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|466.1 MB| + +## References + +https://huggingface.co/DouglasPontes/2020-Q3-50p-filtered-random \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-470_workshop_qa_en.md b/docs/_posts/ahmedlone127/2024-09-10-470_workshop_qa_en.md new file mode 100644 index 00000000000000..40773c713f4afa --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-470_workshop_qa_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English 470_workshop_qa DistilBertForQuestionAnswering from eamonrw +author: John Snow Labs +name: 470_workshop_qa +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`470_workshop_qa` is a English model originally trained by eamonrw. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/470_workshop_qa_en_5.5.0_3.0_1725931988335.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/470_workshop_qa_en_5.5.0_3.0_1725931988335.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = DistilBertForQuestionAnswering.pretrained("470_workshop_qa","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = DistilBertForQuestionAnswering.pretrained("470_workshop_qa", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
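In the example above, the predicted span ends up in the `answer` column as an annotation. A short follow-on sketch for reading it, reusing the `pipelineDF` variable from the example:

```python
# Continuing from the example above: print the extracted answer text.
pipelineDF.select("answer.result").show(truncate=False)
```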
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|470_workshop_qa| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/eamonrw/470_workshop_qa \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-470_workshop_qa_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-470_workshop_qa_pipeline_en.md new file mode 100644 index 00000000000000..34e4f02c4b5c9b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-470_workshop_qa_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English 470_workshop_qa_pipeline pipeline DistilBertForQuestionAnswering from eamonrw +author: John Snow Labs +name: 470_workshop_qa_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`470_workshop_qa_pipeline` is a English model originally trained by eamonrw. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/470_workshop_qa_pipeline_en_5.5.0_3.0_1725931999921.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/470_workshop_qa_pipeline_en_5.5.0_3.0_1725931999921.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("470_workshop_qa_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("470_workshop_qa_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|470_workshop_qa_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/eamonrw/470_workshop_qa + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-akan_ak.md b/docs/_posts/ahmedlone127/2024-09-10-akan_ak.md new file mode 100644 index 00000000000000..ed640faf89d0b5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-akan_ak.md @@ -0,0 +1,84 @@ +--- +layout: model +title: Akan akan WhisperForCTC from devkyle +author: John Snow Labs +name: akan +date: 2024-09-10 +tags: [ak, open_source, onnx, asr, whisper] +task: Automatic Speech Recognition +language: ak +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: WhisperForCTC +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`akan` is a Akan model originally trained by devkyle. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/akan_ak_5.5.0_3.0_1725944267929.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/akan_ak_5.5.0_3.0_1725944267929.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

audioAssembler = AudioAssembler() \
    .setInputCol("audio_content") \
    .setOutputCol("audio_assembler")

speechToText = WhisperForCTC.pretrained("akan","ak") \
    .setInputCols(["audio_assembler"]) \
    .setOutputCol("text")

pipeline = Pipeline().setStages([audioAssembler, speechToText])
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val audioAssembler = new AudioAssembler()
    .setInputCol("audio_content")
    .setOutputCol("audio_assembler")

val speechToText = WhisperForCTC.pretrained("akan", "ak")
    .setInputCols(Array("audio_assembler"))
    .setOutputCol("text")

val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
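Both snippets above expect an existing DataFrame `data` with a raw-audio `audio_content` column. A minimal sketch of building one is shown below; the use of librosa, the local file path, and the 16 kHz sample rate are assumptions for illustration, not part of the original card.

```python
# Build the `data` DataFrame assumed by the example above: one row per clip,
# each holding the raw audio samples as an array of floats.
import librosa  # assumed to be installed; any loader producing float samples works

samples, sample_rate = librosa.load("sample.wav", sr=16000)  # hypothetical local file
data = spark.createDataFrame([[samples.tolist()]]).toDF("audio_content")
```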
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|akan| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[audio_assembler]| +|Output Labels:|[text]| +|Language:|ak| +|Size:|390.1 MB| + +## References + +https://huggingface.co/devkyle/Akan \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-all_mpnet_128_20_mnsr_base_en.md b/docs/_posts/ahmedlone127/2024-09-10-all_mpnet_128_20_mnsr_base_en.md new file mode 100644 index 00000000000000..cf7a4f7fa48214 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-all_mpnet_128_20_mnsr_base_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English all_mpnet_128_20_mnsr_base MPNetEmbeddings from ronanki +author: John Snow Labs +name: all_mpnet_128_20_mnsr_base +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_mpnet_128_20_mnsr_base` is a English model originally trained by ronanki. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_mpnet_128_20_mnsr_base_en_5.5.0_3.0_1725964123002.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_mpnet_128_20_mnsr_base_en_5.5.0_3.0_1725964123002.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("all_mpnet_128_20_mnsr_base","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("all_mpnet_128_20_mnsr_base","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
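`MPNetEmbeddings` produces one sentence-level vector per input document (written to the `embeddings` column above), and such vectors are typically compared with cosine similarity. A self-contained sketch follows; the short placeholder vectors stand in for real model output, which has many more dimensions.

```python
# Cosine similarity between two sentence vectors, e.g. values pulled from
# pipelineDF.selectExpr("explode(embeddings.embeddings)"). Placeholder numbers only.
import numpy as np

vec_a = np.array([0.12, -0.03, 0.54])
vec_b = np.array([0.10, -0.01, 0.57])

similarity = float(np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b)))
print(round(similarity, 3))
```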
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_mpnet_128_20_mnsr_base| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/ronanki/all_mpnet_128_20_MNSR_base \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-all_mpnet_base_newtriplets_v2_lr_2e_7_m_1_e_3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-all_mpnet_base_newtriplets_v2_lr_2e_7_m_1_e_3_pipeline_en.md new file mode 100644 index 00000000000000..4b3bd5a41b9f5f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-all_mpnet_base_newtriplets_v2_lr_2e_7_m_1_e_3_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English all_mpnet_base_newtriplets_v2_lr_2e_7_m_1_e_3_pipeline pipeline MPNetEmbeddings from luiz-and-robert-thesis +author: John Snow Labs +name: all_mpnet_base_newtriplets_v2_lr_2e_7_m_1_e_3_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_mpnet_base_newtriplets_v2_lr_2e_7_m_1_e_3_pipeline` is a English model originally trained by luiz-and-robert-thesis. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_mpnet_base_newtriplets_v2_lr_2e_7_m_1_e_3_pipeline_en_5.5.0_3.0_1725935922597.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_mpnet_base_newtriplets_v2_lr_2e_7_m_1_e_3_pipeline_en_5.5.0_3.0_1725935922597.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("all_mpnet_base_newtriplets_v2_lr_2e_7_m_1_e_3_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("all_mpnet_base_newtriplets_v2_lr_2e_7_m_1_e_3_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_mpnet_base_newtriplets_v2_lr_2e_7_m_1_e_3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-2e-7-m-1-e-3 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-all_mpnet_base_v2_fine_tuned_epochs_1_binhcode25_en.md b/docs/_posts/ahmedlone127/2024-09-10-all_mpnet_base_v2_fine_tuned_epochs_1_binhcode25_en.md new file mode 100644 index 00000000000000..12b1b38f75c4b7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-all_mpnet_base_v2_fine_tuned_epochs_1_binhcode25_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English all_mpnet_base_v2_fine_tuned_epochs_1_binhcode25 MPNetEmbeddings from binhcode25 +author: John Snow Labs +name: all_mpnet_base_v2_fine_tuned_epochs_1_binhcode25 +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_mpnet_base_v2_fine_tuned_epochs_1_binhcode25` is a English model originally trained by binhcode25. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_fine_tuned_epochs_1_binhcode25_en_5.5.0_3.0_1725963273678.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_mpnet_base_v2_fine_tuned_epochs_1_binhcode25_en_5.5.0_3.0_1725963273678.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("all_mpnet_base_v2_fine_tuned_epochs_1_binhcode25","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("all_mpnet_base_v2_fine_tuned_epochs_1_binhcode25","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_mpnet_base_v2_fine_tuned_epochs_1_binhcode25| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/binhcode25/all-mpnet-base-v2-fine-tuned-epochs-1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-all_roberta_large_v1_meta_6_16_5_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-all_roberta_large_v1_meta_6_16_5_pipeline_en.md new file mode 100644 index 00000000000000..f4f22742945462 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-all_roberta_large_v1_meta_6_16_5_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English all_roberta_large_v1_meta_6_16_5_pipeline pipeline RoBertaForSequenceClassification from fathyshalab +author: John Snow Labs +name: all_roberta_large_v1_meta_6_16_5_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`all_roberta_large_v1_meta_6_16_5_pipeline` is a English model originally trained by fathyshalab. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/all_roberta_large_v1_meta_6_16_5_pipeline_en_5.5.0_3.0_1725964884937.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/all_roberta_large_v1_meta_6_16_5_pipeline_en_5.5.0_3.0_1725964884937.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("all_roberta_large_v1_meta_6_16_5_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("all_roberta_large_v1_meta_6_16_5_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|all_roberta_large_v1_meta_6_16_5_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/fathyshalab/all-roberta-large-v1-meta-6-16-5 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-asr_nepal_bhasa_yoruba_dummy_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-asr_nepal_bhasa_yoruba_dummy_pipeline_en.md new file mode 100644 index 00000000000000..6a888f1e38001d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-asr_nepal_bhasa_yoruba_dummy_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English asr_nepal_bhasa_yoruba_dummy_pipeline pipeline WhisperForCTC from babs +author: John Snow Labs +name: asr_nepal_bhasa_yoruba_dummy_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`asr_nepal_bhasa_yoruba_dummy_pipeline` is a English model originally trained by babs. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/asr_nepal_bhasa_yoruba_dummy_pipeline_en_5.5.0_3.0_1725940804865.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/asr_nepal_bhasa_yoruba_dummy_pipeline_en_5.5.0_3.0_1725940804865.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("asr_nepal_bhasa_yoruba_dummy_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("asr_nepal_bhasa_yoruba_dummy_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|asr_nepal_bhasa_yoruba_dummy_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|863.9 MB| + +## References + +https://huggingface.co/babs/ASR-new-yo-dummy + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-autonlp_predict_roi_1_29797730_en.md b/docs/_posts/ahmedlone127/2024-09-10-autonlp_predict_roi_1_29797730_en.md new file mode 100644 index 00000000000000..56520ff31d7846 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-autonlp_predict_roi_1_29797730_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English autonlp_predict_roi_1_29797730 RoBertaForSequenceClassification from ds198799 +author: John Snow Labs +name: autonlp_predict_roi_1_29797730 +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`autonlp_predict_roi_1_29797730` is a English model originally trained by ds198799. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/autonlp_predict_roi_1_29797730_en_5.5.0_3.0_1725965840142.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/autonlp_predict_roi_1_29797730_en_5.5.0_3.0_1725965840142.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = RoBertaForSequenceClassification.pretrained("autonlp_predict_roi_1_29797730","en") \
    .setInputCols(["document","token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = RoBertaForSequenceClassification.pretrained("autonlp_predict_roi_1_29797730", "en")
    .setInputCols(Array("document","token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|autonlp_predict_roi_1_29797730| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|424.2 MB| + +## References + +https://huggingface.co/ds198799/autonlp-predict_ROI_1-29797730 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-autotrain_air_sea_services_syn_mixed_data_28092023_91916144627_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-autotrain_air_sea_services_syn_mixed_data_28092023_91916144627_pipeline_en.md new file mode 100644 index 00000000000000..074f531fbbfca0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-autotrain_air_sea_services_syn_mixed_data_28092023_91916144627_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English autotrain_air_sea_services_syn_mixed_data_28092023_91916144627_pipeline pipeline XlmRoBertaForTokenClassification from sxandie +author: John Snow Labs +name: autotrain_air_sea_services_syn_mixed_data_28092023_91916144627_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`autotrain_air_sea_services_syn_mixed_data_28092023_91916144627_pipeline` is a English model originally trained by sxandie. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/autotrain_air_sea_services_syn_mixed_data_28092023_91916144627_pipeline_en_5.5.0_3.0_1725928669896.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/autotrain_air_sea_services_syn_mixed_data_28092023_91916144627_pipeline_en_5.5.0_3.0_1725928669896.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("autotrain_air_sea_services_syn_mixed_data_28092023_91916144627_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("autotrain_air_sea_services_syn_mixed_data_28092023_91916144627_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|autotrain_air_sea_services_syn_mixed_data_28092023_91916144627_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|781.5 MB| + +## References + +https://huggingface.co/sxandie/autotrain-air-sea-services-syn-mixed-data-28092023-91916144627 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-autotrain_nnds7_fkzxh_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-autotrain_nnds7_fkzxh_pipeline_en.md new file mode 100644 index 00000000000000..223909d0ede533 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-autotrain_nnds7_fkzxh_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English autotrain_nnds7_fkzxh_pipeline pipeline MPNetForSequenceClassification from BloodJackson +author: John Snow Labs +name: autotrain_nnds7_fkzxh_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`autotrain_nnds7_fkzxh_pipeline` is a English model originally trained by BloodJackson. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/autotrain_nnds7_fkzxh_pipeline_en_5.5.0_3.0_1725947310865.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/autotrain_nnds7_fkzxh_pipeline_en_5.5.0_3.0_1725947310865.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("autotrain_nnds7_fkzxh_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("autotrain_nnds7_fkzxh_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|autotrain_nnds7_fkzxh_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|410.4 MB| + +## References + +https://huggingface.co/BloodJackson/autotrain-nnds7-fkzxh + +## Included Models + +- DocumentAssembler +- TokenizerModel +- MPNetForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-babyberta_aochildes_french_wikipedia_french_without_masking_seed6_finetuned_squad_en.md b/docs/_posts/ahmedlone127/2024-09-10-babyberta_aochildes_french_wikipedia_french_without_masking_seed6_finetuned_squad_en.md new file mode 100644 index 00000000000000..5aacca4d137961 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-babyberta_aochildes_french_wikipedia_french_without_masking_seed6_finetuned_squad_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English babyberta_aochildes_french_wikipedia_french_without_masking_seed6_finetuned_squad RoBertaForQuestionAnswering from lielbin +author: John Snow Labs +name: babyberta_aochildes_french_wikipedia_french_without_masking_seed6_finetuned_squad +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`babyberta_aochildes_french_wikipedia_french_without_masking_seed6_finetuned_squad` is a English model originally trained by lielbin. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/babyberta_aochildes_french_wikipedia_french_without_masking_seed6_finetuned_squad_en_5.5.0_3.0_1725958854086.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/babyberta_aochildes_french_wikipedia_french_without_masking_seed6_finetuned_squad_en_5.5.0_3.0_1725958854086.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = RoBertaForQuestionAnswering.pretrained("babyberta_aochildes_french_wikipedia_french_without_masking_seed6_finetuned_squad","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = RoBertaForQuestionAnswering.pretrained("babyberta_aochildes_french_wikipedia_french_without_masking_seed6_finetuned_squad", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|babyberta_aochildes_french_wikipedia_french_without_masking_seed6_finetuned_squad| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|32.0 MB| + +## References + +https://huggingface.co/lielbin/BabyBERTa-aochildes-french_wikipedia_french-without-Masking-seed6-finetuned-SQuAD \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-bert_base_banking77_pt2_sharmax_vikas_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-bert_base_banking77_pt2_sharmax_vikas_pipeline_en.md new file mode 100644 index 00000000000000..3f1166feec072d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-bert_base_banking77_pt2_sharmax_vikas_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bert_base_banking77_pt2_sharmax_vikas_pipeline pipeline BertForSequenceClassification from sharmax-vikas +author: John Snow Labs +name: bert_base_banking77_pt2_sharmax_vikas_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_base_banking77_pt2_sharmax_vikas_pipeline` is a English model originally trained by sharmax-vikas. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_banking77_pt2_sharmax_vikas_pipeline_en_5.5.0_3.0_1725957537223.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_banking77_pt2_sharmax_vikas_pipeline_en_5.5.0_3.0_1725957537223.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_base_banking77_pt2_sharmax_vikas_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_base_banking77_pt2_sharmax_vikas_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_banking77_pt2_sharmax_vikas_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|409.6 MB| + +## References + +https://huggingface.co/sharmax-vikas/bert-base-banking77-pt2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- BertForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-bert_base_cased_squad_v1_1_portuguese_v1_1_9_en.md b/docs/_posts/ahmedlone127/2024-09-10-bert_base_cased_squad_v1_1_portuguese_v1_1_9_en.md new file mode 100644 index 00000000000000..8d5c2117563419 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-bert_base_cased_squad_v1_1_portuguese_v1_1_9_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English bert_base_cased_squad_v1_1_portuguese_v1_1_9 BertForQuestionAnswering from alcalazans +author: John Snow Labs +name: bert_base_cased_squad_v1_1_portuguese_v1_1_9 +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, bert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_base_cased_squad_v1_1_portuguese_v1_1_9` is a English model originally trained by alcalazans. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_cased_squad_v1_1_portuguese_v1_1_9_en_5.5.0_3.0_1725926589912.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_cased_squad_v1_1_portuguese_v1_1_9_en_5.5.0_3.0_1725926589912.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = BertForQuestionAnswering.pretrained("bert_base_cased_squad_v1_1_portuguese_v1_1_9","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = BertForQuestionAnswering.pretrained("bert_base_cased_squad_v1_1_portuguese_v1_1_9", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_cased_squad_v1_1_portuguese_v1_1_9| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|405.9 MB| + +## References + +https://huggingface.co/alcalazans/bert-base-cased-squad-v1.1-portuguese_v1.1.9 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-bert_base_cased_squad_v1_1_portuguese_v1_1_9_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-bert_base_cased_squad_v1_1_portuguese_v1_1_9_pipeline_en.md new file mode 100644 index 00000000000000..9417664869a518 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-bert_base_cased_squad_v1_1_portuguese_v1_1_9_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English bert_base_cased_squad_v1_1_portuguese_v1_1_9_pipeline pipeline BertForQuestionAnswering from alcalazans +author: John Snow Labs +name: bert_base_cased_squad_v1_1_portuguese_v1_1_9_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_base_cased_squad_v1_1_portuguese_v1_1_9_pipeline` is a English model originally trained by alcalazans. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_cased_squad_v1_1_portuguese_v1_1_9_pipeline_en_5.5.0_3.0_1725926609949.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_cased_squad_v1_1_portuguese_v1_1_9_pipeline_en_5.5.0_3.0_1725926609949.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_base_cased_squad_v1_1_portuguese_v1_1_9_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_base_cased_squad_v1_1_portuguese_v1_1_9_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
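+
+Because this pipeline wraps a MultiDocumentAssembler, the input DataFrame is assumed to provide `question` and `context` columns. A minimal sketch under those assumptions (the example texts are illustrative only):
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()
+
+pipeline = PretrainedPipeline("bert_base_cased_squad_v1_1_portuguese_v1_1_9_pipeline", lang="en")
+
+# Assumed input columns for a question-answering pipeline built on MultiDocumentAssembler.
+df = spark.createDataFrame(
+    [["What framework do I use?", "I use spark-nlp."]]
+).toDF("question", "context")
+pipeline.transform(df).show(truncate=False)
+```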
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_cased_squad_v1_1_portuguese_v1_1_9_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.0 MB| + +## References + +https://huggingface.co/alcalazans/bert-base-cased-squad-v1.1-portuguese_v1.1.9 + +## Included Models + +- MultiDocumentAssembler +- BertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-bert_base_kununua_model_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-bert_base_kununua_model_pipeline_en.md new file mode 100644 index 00000000000000..268aeb7eb5d1fc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-bert_base_kununua_model_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bert_base_kununua_model_pipeline pipeline RoBertaForSequenceClassification from Alex-GF +author: John Snow Labs +name: bert_base_kununua_model_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_base_kununua_model_pipeline` is a English model originally trained by Alex-GF. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_kununua_model_pipeline_en_5.5.0_3.0_1725962410441.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_kununua_model_pipeline_en_5.5.0_3.0_1725962410441.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bert_base_kununua_model_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bert_base_kununua_model_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
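+
+A fuller sketch of the call above, assuming a Spark session from `sparknlp.start()` and a `text` input column; the review sentence is an illustrative placeholder:
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()
+
+pipeline = PretrainedPipeline("bert_base_kununua_model_pipeline", lang="en")
+
+# Assumed "text" input column for the sequence-classification pipeline.
+df = spark.createDataFrame([["The service was quick and the staff were friendly."]]).toDF("text")
+pipeline.transform(df).show(truncate=False)
+```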
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_kununua_model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|308.7 MB| + +## References + +https://huggingface.co/Alex-GF/bert-base-kununua-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-bert_base_qa_model_7up_en.md b/docs/_posts/ahmedlone127/2024-09-10-bert_base_qa_model_7up_en.md new file mode 100644 index 00000000000000..97b79367ccb66d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-bert_base_qa_model_7up_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English bert_base_qa_model_7up DistilBertForQuestionAnswering from cadzchua +author: John Snow Labs +name: bert_base_qa_model_7up +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_base_qa_model_7up` is a English model originally trained by cadzchua. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_qa_model_7up_en_5.5.0_3.0_1725960376615.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_qa_model_7up_en_5.5.0_3.0_1725960376615.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("bert_base_qa_model_7up","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("bert_base_qa_model_7up", "en")
+  .setInputCols(Array("document_question", "document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_qa_model_7up| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/cadzchua/bert-base-qa-model-7up \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-bert_base_turkish_sentiment_analysis_tr.md b/docs/_posts/ahmedlone127/2024-09-10-bert_base_turkish_sentiment_analysis_tr.md new file mode 100644 index 00000000000000..9104f42fcd0861 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-bert_base_turkish_sentiment_analysis_tr.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Turkish bert_base_turkish_sentiment_analysis BertForSequenceClassification from saribasmetehan +author: John Snow Labs +name: bert_base_turkish_sentiment_analysis +date: 2024-09-10 +tags: [tr, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: tr +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_base_turkish_sentiment_analysis` is a Turkish model originally trained by saribasmetehan. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_turkish_sentiment_analysis_tr_5.5.0_3.0_1725957885173.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_turkish_sentiment_analysis_tr_5.5.0_3.0_1725957885173.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = BertForSequenceClassification.pretrained("bert_base_turkish_sentiment_analysis","tr") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = BertForSequenceClassification.pretrained("bert_base_turkish_sentiment_analysis", "tr")
+  .setInputCols(Array("document", "token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_turkish_sentiment_analysis| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|tr| +|Size:|414.5 MB| + +## References + +https://huggingface.co/saribasmetehan/bert-base-turkish-sentiment-analysis \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-bert_base_uncased_ftd_on_glue_qqp_iter_1_en.md b/docs/_posts/ahmedlone127/2024-09-10-bert_base_uncased_ftd_on_glue_qqp_iter_1_en.md new file mode 100644 index 00000000000000..9066636aead128 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-bert_base_uncased_ftd_on_glue_qqp_iter_1_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English bert_base_uncased_ftd_on_glue_qqp_iter_1 BertForSequenceClassification from Ibrahim-Alam +author: John Snow Labs +name: bert_base_uncased_ftd_on_glue_qqp_iter_1 +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_base_uncased_ftd_on_glue_qqp_iter_1` is a English model originally trained by Ibrahim-Alam. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_base_uncased_ftd_on_glue_qqp_iter_1_en_5.5.0_3.0_1725957177650.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_base_uncased_ftd_on_glue_qqp_iter_1_en_5.5.0_3.0_1725957177650.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = BertForSequenceClassification.pretrained("bert_base_uncased_ftd_on_glue_qqp_iter_1","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = BertForSequenceClassification.pretrained("bert_base_uncased_ftd_on_glue_qqp_iter_1", "en")
+  .setInputCols(Array("document", "token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_base_uncased_ftd_on_glue_qqp_iter_1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.4 MB| + +## References + +https://huggingface.co/Ibrahim-Alam/bert-base-uncased_FTd_on_glue-qqp_iter-1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-bert_classifier_autonlp_cat333_624217911_zh.md b/docs/_posts/ahmedlone127/2024-09-10-bert_classifier_autonlp_cat333_624217911_zh.md new file mode 100644 index 00000000000000..4b2d7ce0b31fb8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-bert_classifier_autonlp_cat333_624217911_zh.md @@ -0,0 +1,98 @@ +--- +layout: model +title: Chinese BertForSequenceClassification Cased model (from kyleinincubated) +author: John Snow Labs +name: bert_classifier_autonlp_cat333_624217911 +date: 2024-09-10 +tags: [zh, open_source, bert, sequence_classification, classification, onnx] +task: Text Classification +language: zh +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `autonlp-cat333-624217911` is a Chinese model originally trained by `kyleinincubated`. + +## Predicted Entities + +`渔业`, `采矿业`, `公用事业`, `交通运输`, `农业`, `电子制造`, `休闲服务`, `文化`, `商业贸易`, `畜牧业`, `林业`, `轻工制造`, `教育`, `食品饮料`, `化工制造`, `非银金融`, `房地产`, `传媒`, `通信`, `汽车制造`, `信息技术`, `有色金属`, `互联网服务`, `银行`, `纺织服装制造`, `医药生物`, `钢铁`, `建筑业`, `电气设备` + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_classifier_autonlp_cat333_624217911_zh_5.5.0_3.0_1725957339742.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_classifier_autonlp_cat333_624217911_zh_5.5.0_3.0_1725957339742.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+documentAssembler = DocumentAssembler() \
+    .setInputCol("text") \
+    .setOutputCol("document")
+
+tokenizer = Tokenizer() \
+    .setInputCols("document") \
+    .setOutputCol("token")
+
+seq_classifier = BertForSequenceClassification.pretrained("bert_classifier_autonlp_cat333_624217911","zh") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline(stages=[documentAssembler, tokenizer, seq_classifier])
+
+data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")
+
+result = pipeline.fit(data).transform(data)
+```
+```scala
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols("document")
+  .setOutputCol("token")
+
+val seq_classifier = BertForSequenceClassification.pretrained("bert_classifier_autonlp_cat333_624217911","zh")
+  .setInputCols(Array("document", "token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, seq_classifier))
+
+val data = Seq("PUT YOUR STRING HERE").toDS.toDF("text")
+
+val result = pipeline.fit(data).transform(data)
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_classifier_autonlp_cat333_624217911| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|zh| +|Size:|383.5 MB| + +## References + +References + +- https://huggingface.co/kyleinincubated/autonlp-cat333-624217911 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-bert_finetuned_squadv2_en.md b/docs/_posts/ahmedlone127/2024-09-10-bert_finetuned_squadv2_en.md new file mode 100644 index 00000000000000..096ead1dbaf5c5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-bert_finetuned_squadv2_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English bert_finetuned_squadv2 DistilBertForQuestionAnswering from FuuToru +author: John Snow Labs +name: bert_finetuned_squadv2 +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_finetuned_squadv2` is a English model originally trained by FuuToru. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_finetuned_squadv2_en_5.5.0_3.0_1725931942529.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_finetuned_squadv2_en_5.5.0_3.0_1725931942529.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("bert_finetuned_squadv2","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("bert_finetuned_squadv2", "en")
+  .setInputCols(Array("document_question", "document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_finetuned_squadv2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/FuuToru/bert-finetuned-squadv2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-bert_qa_morgana_rodrigues_en.md b/docs/_posts/ahmedlone127/2024-09-10-bert_qa_morgana_rodrigues_en.md new file mode 100644 index 00000000000000..24cbc5d4506b03 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-bert_qa_morgana_rodrigues_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English bert_qa_morgana_rodrigues DistilBertForQuestionAnswering from morgana-rodrigues +author: John Snow Labs +name: bert_qa_morgana_rodrigues +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bert_qa_morgana_rodrigues` is a English model originally trained by morgana-rodrigues. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_qa_morgana_rodrigues_en_5.5.0_3.0_1725932060696.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_qa_morgana_rodrigues_en_5.5.0_3.0_1725932060696.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("bert_qa_morgana_rodrigues","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("bert_qa_morgana_rodrigues", "en")
+  .setInputCols(Array("document_question", "document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bert_qa_morgana_rodrigues| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|243.8 MB| + +## References + +https://huggingface.co/morgana-rodrigues/bert_qa \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-best_64_shots_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-best_64_shots_pipeline_en.md new file mode 100644 index 00000000000000..194471397939b2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-best_64_shots_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English best_64_shots_pipeline pipeline MPNetEmbeddings from Nhat1904 +author: John Snow Labs +name: best_64_shots_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`best_64_shots_pipeline` is a English model originally trained by Nhat1904. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/best_64_shots_pipeline_en_5.5.0_3.0_1725963300434.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/best_64_shots_pipeline_en_5.5.0_3.0_1725963300434.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("best_64_shots_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("best_64_shots_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
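+
+The `df` above is assumed to be a DataFrame with a `text` column whose sentences are embedded; a minimal sketch under that assumption, with an illustrative input sentence:
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()
+
+pipeline = PretrainedPipeline("best_64_shots_pipeline", lang="en")
+
+# Assumed "text" input column for the embeddings pipeline.
+df = spark.createDataFrame([["Spark NLP makes it easy to compute sentence embeddings at scale."]]).toDF("text")
+pipeline.transform(df).show(truncate=False)
+```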
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|best_64_shots_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/Nhat1904/best_64_shots + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-bsc_bio_ehr_spanish_symptemist_word2vec_75_ner_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-bsc_bio_ehr_spanish_symptemist_word2vec_75_ner_pipeline_en.md new file mode 100644 index 00000000000000..de9e5ddc89d454 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-bsc_bio_ehr_spanish_symptemist_word2vec_75_ner_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English bsc_bio_ehr_spanish_symptemist_word2vec_75_ner_pipeline pipeline RoBertaForTokenClassification from Rodrigo1771 +author: John Snow Labs +name: bsc_bio_ehr_spanish_symptemist_word2vec_75_ner_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`bsc_bio_ehr_spanish_symptemist_word2vec_75_ner_pipeline` is a English model originally trained by Rodrigo1771. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bsc_bio_ehr_spanish_symptemist_word2vec_75_ner_pipeline_en_5.5.0_3.0_1725948703629.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bsc_bio_ehr_spanish_symptemist_word2vec_75_ner_pipeline_en_5.5.0_3.0_1725948703629.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("bsc_bio_ehr_spanish_symptemist_word2vec_75_ner_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("bsc_bio_ehr_spanish_symptemist_word2vec_75_ner_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
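+
+A minimal sketch for running the NER pipeline end to end, assuming a session from `sparknlp.start()` and a `text` input column; the Spanish clinical sentence is only an illustrative placeholder:
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()
+
+pipeline = PretrainedPipeline("bsc_bio_ehr_spanish_symptemist_word2vec_75_ner_pipeline", lang="en")
+
+# Assumed "text" input column; replace with your own clinical notes.
+df = spark.createDataFrame([["El paciente refiere cefalea intensa y náuseas desde hace dos días."]]).toDF("text")
+pipeline.transform(df).show(truncate=False)
+```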
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|bsc_bio_ehr_spanish_symptemist_word2vec_75_ner_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|435.4 MB| + +## References + +https://huggingface.co/Rodrigo1771/bsc-bio-ehr-es-symptemist-word2vec-75-ner + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_eli5_mlm_model_kavan123_en.md b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_eli5_mlm_model_kavan123_en.md new file mode 100644 index 00000000000000..bcb62cb4b71246 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_eli5_mlm_model_kavan123_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English burmese_awesome_eli5_mlm_model_kavan123 RoBertaEmbeddings from Kavan123 +author: John Snow Labs +name: burmese_awesome_eli5_mlm_model_kavan123 +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_eli5_mlm_model_kavan123` is a English model originally trained by Kavan123. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_eli5_mlm_model_kavan123_en_5.5.0_3.0_1725930664022.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_eli5_mlm_model_kavan123_en_5.5.0_3.0_1725930664022.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("burmese_awesome_eli5_mlm_model_kavan123","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("burmese_awesome_eli5_mlm_model_kavan123","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_eli5_mlm_model_kavan123| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|306.5 MB| + +## References + +https://huggingface.co/Kavan123/my_awesome_eli5_mlm_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_eli5_mlm_model_kavan123_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_eli5_mlm_model_kavan123_pipeline_en.md new file mode 100644 index 00000000000000..dcd8daddc2ecdf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_eli5_mlm_model_kavan123_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English burmese_awesome_eli5_mlm_model_kavan123_pipeline pipeline RoBertaEmbeddings from Kavan123 +author: John Snow Labs +name: burmese_awesome_eli5_mlm_model_kavan123_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_eli5_mlm_model_kavan123_pipeline` is a English model originally trained by Kavan123. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_eli5_mlm_model_kavan123_pipeline_en_5.5.0_3.0_1725930678829.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_eli5_mlm_model_kavan123_pipeline_en_5.5.0_3.0_1725930678829.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_eli5_mlm_model_kavan123_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_eli5_mlm_model_kavan123_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
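+
+A short sketch of how `df` might be built for this embeddings pipeline, assuming an active Spark session and a `text` input column (the sentence is an example only):
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()
+
+pipeline = PretrainedPipeline("burmese_awesome_eli5_mlm_model_kavan123_pipeline", lang="en")
+
+# Assumed "text" input column for the RoBERTa embeddings pipeline.
+df = spark.createDataFrame([["I love using Spark NLP for token embeddings."]]).toDF("text")
+pipeline.transform(df).show(truncate=False)
+```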
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_eli5_mlm_model_kavan123_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|306.5 MB| + +## References + +https://huggingface.co/Kavan123/my_awesome_eli5_mlm_model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_abdullah_ii_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_abdullah_ii_pipeline_en.md new file mode 100644 index 00000000000000..00205597f5ab07 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_abdullah_ii_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English burmese_awesome_qa_model_abdullah_ii_pipeline pipeline DistilBertForQuestionAnswering from Abdullah-ii +author: John Snow Labs +name: burmese_awesome_qa_model_abdullah_ii_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_abdullah_ii_pipeline` is a English model originally trained by Abdullah-ii. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_abdullah_ii_pipeline_en_5.5.0_3.0_1725931911689.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_abdullah_ii_pipeline_en_5.5.0_3.0_1725931911689.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_qa_model_abdullah_ii_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_qa_model_abdullah_ii_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
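+
+Since this pipeline starts from a MultiDocumentAssembler, the input DataFrame is assumed to have `question` and `context` columns; a minimal sketch under that assumption, with illustrative texts:
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()
+
+pipeline = PretrainedPipeline("burmese_awesome_qa_model_abdullah_ii_pipeline", lang="en")
+
+# Assumed input columns for the question-answering pipeline.
+df = spark.createDataFrame(
+    [["What framework do I use?", "I use spark-nlp."]]
+).toDF("question", "context")
+pipeline.transform(df).show(truncate=False)
+```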
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_abdullah_ii_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Abdullah-ii/my_awesome_qa_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_darren1105_en.md b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_darren1105_en.md new file mode 100644 index 00000000000000..41051d7683db9a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_darren1105_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_awesome_qa_model_darren1105 DistilBertForQuestionAnswering from darren1105 +author: John Snow Labs +name: burmese_awesome_qa_model_darren1105 +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_darren1105` is a English model originally trained by darren1105. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_darren1105_en_5.5.0_3.0_1725931868805.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_darren1105_en_5.5.0_3.0_1725931868805.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_darren1105","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_darren1105", "en")
+  .setInputCols(Array("document_question", "document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_darren1105| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/darren1105/my_awesome_qa_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_darren1105_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_darren1105_pipeline_en.md new file mode 100644 index 00000000000000..da89e8531ffa92 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_darren1105_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English burmese_awesome_qa_model_darren1105_pipeline pipeline DistilBertForQuestionAnswering from darren1105 +author: John Snow Labs +name: burmese_awesome_qa_model_darren1105_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_darren1105_pipeline` is a English model originally trained by darren1105. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_darren1105_pipeline_en_5.5.0_3.0_1725931880826.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_darren1105_pipeline_en_5.5.0_3.0_1725931880826.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_qa_model_darren1105_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_qa_model_darren1105_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
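+
+As above, `df` is assumed to be a DataFrame with `question` and `context` columns, matching the pipeline's MultiDocumentAssembler; a minimal sketch with illustrative inputs:
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()
+
+pipeline = PretrainedPipeline("burmese_awesome_qa_model_darren1105_pipeline", lang="en")
+
+# Assumed input columns for the question-answering pipeline.
+df = spark.createDataFrame(
+    [["What framework do I use?", "I use spark-nlp."]]
+).toDF("question", "context")
+pipeline.transform(df).show(truncate=False)
+```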
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_darren1105_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/darren1105/my_awesome_qa_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_derf989_en.md b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_derf989_en.md new file mode 100644 index 00000000000000..8090d8026df65c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_derf989_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_awesome_qa_model_derf989 DistilBertForQuestionAnswering from Derf989 +author: John Snow Labs +name: burmese_awesome_qa_model_derf989 +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_derf989` is a English model originally trained by Derf989. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_derf989_en_5.5.0_3.0_1725932249770.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_derf989_en_5.5.0_3.0_1725932249770.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_derf989","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_derf989", "en")
+  .setInputCols(Array("document_question", "document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_derf989| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Derf989/my_awesome_qa_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_fsghs_en.md b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_fsghs_en.md new file mode 100644 index 00000000000000..f39c35910254f1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_fsghs_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_awesome_qa_model_fsghs DistilBertForQuestionAnswering from fsghs +author: John Snow Labs +name: burmese_awesome_qa_model_fsghs +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_fsghs` is a English model originally trained by fsghs. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_fsghs_en_5.5.0_3.0_1725960418862.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_fsghs_en_5.5.0_3.0_1725960418862.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_fsghs","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_fsghs", "en")
+  .setInputCols(Array("document_question", "document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_fsghs| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/fsghs/my_awesome_qa_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_fsghs_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_fsghs_pipeline_en.md new file mode 100644 index 00000000000000..a106e1af4dfa55 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_fsghs_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English burmese_awesome_qa_model_fsghs_pipeline pipeline DistilBertForQuestionAnswering from fsghs +author: John Snow Labs +name: burmese_awesome_qa_model_fsghs_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_fsghs_pipeline` is a English model originally trained by fsghs. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_fsghs_pipeline_en_5.5.0_3.0_1725960430815.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_fsghs_pipeline_en_5.5.0_3.0_1725960430815.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_qa_model_fsghs_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_qa_model_fsghs_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
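+
+Here too, the input DataFrame is assumed to expose `question` and `context` columns for the pipeline's MultiDocumentAssembler; a minimal sketch with placeholder texts:
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()
+
+pipeline = PretrainedPipeline("burmese_awesome_qa_model_fsghs_pipeline", lang="en")
+
+# Assumed input columns for the question-answering pipeline.
+df = spark.createDataFrame(
+    [["What framework do I use?", "I use spark-nlp."]]
+).toDF("question", "context")
+pipeline.transform(df).show(truncate=False)
+```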
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_fsghs_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/fsghs/my_awesome_qa_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_herutriana44_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_herutriana44_pipeline_en.md new file mode 100644 index 00000000000000..af88130a7e144b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_herutriana44_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English burmese_awesome_qa_model_herutriana44_pipeline pipeline DistilBertForQuestionAnswering from herutriana44 +author: John Snow Labs +name: burmese_awesome_qa_model_herutriana44_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_herutriana44_pipeline` is a English model originally trained by herutriana44. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_herutriana44_pipeline_en_5.5.0_3.0_1725932519363.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_herutriana44_pipeline_en_5.5.0_3.0_1725932519363.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_qa_model_herutriana44_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_qa_model_herutriana44_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
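+
+The `df` above is assumed to carry `question` and `context` columns, matching the pipeline's MultiDocumentAssembler inputs; a minimal sketch under that assumption, with illustrative texts:
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()
+
+pipeline = PretrainedPipeline("burmese_awesome_qa_model_herutriana44_pipeline", lang="en")
+
+# Assumed input columns for the question-answering pipeline.
+df = spark.createDataFrame(
+    [["What framework do I use?", "I use spark-nlp."]]
+).toDF("question", "context")
+pipeline.transform(df).show(truncate=False)
+```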
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_herutriana44_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/herutriana44/my_awesome_qa_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_ivan3ol_en.md b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_ivan3ol_en.md new file mode 100644 index 00000000000000..cdd5b61c5d1edb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_ivan3ol_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_awesome_qa_model_ivan3ol DistilBertForQuestionAnswering from ivan3ol +author: John Snow Labs +name: burmese_awesome_qa_model_ivan3ol +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_ivan3ol` is a English model originally trained by ivan3ol. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_ivan3ol_en_5.5.0_3.0_1725932330491.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_ivan3ol_en_5.5.0_3.0_1725932330491.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_ivan3ol","en") \
+    .setInputCols(["document_question", "document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_ivan3ol", "en")
+  .setInputCols(Array("document_question", "document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_ivan3ol| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/ivan3ol/my_awesome_qa_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_jleung1618_en.md b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_jleung1618_en.md new file mode 100644 index 00000000000000..3e9f40c0fcd25e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_jleung1618_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_awesome_qa_model_jleung1618 DistilBertForQuestionAnswering from JLeung1618 +author: John Snow Labs +name: burmese_awesome_qa_model_jleung1618 +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_jleung1618` is a English model originally trained by JLeung1618. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_jleung1618_en_5.5.0_3.0_1725960555035.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_jleung1618_en_5.5.0_3.0_1725960555035.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = MultiDocumentAssembler() \ + .setInputCol(["question", "context"]) \ + .setOutputCol(["document_question", "document_context"]) + +spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_jleung1618","en") \ + .setInputCols(["document_question","document_context"]) \ + .setOutputCol("answer") + +pipeline = Pipeline().setStages([documentAssembler, spanClassifier]) +data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("document_question", "document_context") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new MultiDocumentAssembler() + .setInputCol(Array("question", "context")) + .setOutputCol(Array("document_question", "document_context")) + +val spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_jleung1618", "en") + .setInputCols(Array("document_question","document_context")) + .setOutputCol("answer") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier)) +val data = Seq("What framework do I use?","I use spark-nlp.").toDS.toDF("document_question", "document_context") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_jleung1618| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/JLeung1618/my_awesome_qa_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_sachinsharma0325_en.md b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_sachinsharma0325_en.md new file mode 100644 index 00000000000000..e9cbcad9797c30 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_sachinsharma0325_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_awesome_qa_model_sachinsharma0325 DistilBertForQuestionAnswering from SachinSharma0325 +author: John Snow Labs +name: burmese_awesome_qa_model_sachinsharma0325 +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_sachinsharma0325` is a English model originally trained by SachinSharma0325. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_sachinsharma0325_en_5.5.0_3.0_1725960178008.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_sachinsharma0325_en_5.5.0_3.0_1725960178008.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = MultiDocumentAssembler() \ + .setInputCol(["question", "context"]) \ + .setOutputCol(["document_question", "document_context"]) + +spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_sachinsharma0325","en") \ + .setInputCols(["document_question","document_context"]) \ + .setOutputCol("answer") + +pipeline = Pipeline().setStages([documentAssembler, spanClassifier]) +data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("document_question", "document_context") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new MultiDocumentAssembler() + .setInputCol(Array("question", "context")) + .setOutputCol(Array("document_question", "document_context")) + +val spanClassifier = DistilBertForQuestionAnswering.pretrained("burmese_awesome_qa_model_sachinsharma0325", "en") + .setInputCols(Array("document_question","document_context")) + .setOutputCol("answer") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier)) +val data = Seq("What framework do I use?","I use spark-nlp.").toDS.toDF("document_question", "document_context") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_sachinsharma0325| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/SachinSharma0325/my_awesome_qa_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_shiftinglegs_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_shiftinglegs_pipeline_en.md new file mode 100644 index 00000000000000..60af66202aa635 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-burmese_awesome_qa_model_shiftinglegs_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English burmese_awesome_qa_model_shiftinglegs_pipeline pipeline DistilBertForQuestionAnswering from ShiftingLegs +author: John Snow Labs +name: burmese_awesome_qa_model_shiftinglegs_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_awesome_qa_model_shiftinglegs_pipeline` is a English model originally trained by ShiftingLegs. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_shiftinglegs_pipeline_en_5.5.0_3.0_1725932148333.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_awesome_qa_model_shiftinglegs_pipeline_en_5.5.0_3.0_1725932148333.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_awesome_qa_model_shiftinglegs_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_awesome_qa_model_shiftinglegs_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_awesome_qa_model_shiftinglegs_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/ShiftingLegs/my_awesome_qa_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-burmese_mps_roberta_based_model_en.md b/docs/_posts/ahmedlone127/2024-09-10-burmese_mps_roberta_based_model_en.md new file mode 100644 index 00000000000000..2113fbea6943d7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-burmese_mps_roberta_based_model_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English burmese_mps_roberta_based_model RoBertaEmbeddings from MS-Huang0714 +author: John Snow Labs +name: burmese_mps_roberta_based_model +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_mps_roberta_based_model` is a English model originally trained by MS-Huang0714. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_mps_roberta_based_model_en_5.5.0_3.0_1725930722311.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_mps_roberta_based_model_en_5.5.0_3.0_1725930722311.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("burmese_mps_roberta_based_model","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("burmese_mps_roberta_based_model","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
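+
+A short follow-on sketch for inspecting the token-level output of the Python example above (standard Spark functions only):
+
+```python
+from pyspark.sql import functions as F
+
+# Each element of "embeddings" is one token annotation: result is the token text,
+# embeddings is its vector.
+(pipelineDF
+    .select(F.explode("embeddings").alias("emb"))
+    .select(F.col("emb.result").alias("token"), F.size(F.col("emb.embeddings")).alias("dim"))
+    .show(truncate=False))
+```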
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_mps_roberta_based_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|465.5 MB| + +## References + +https://huggingface.co/MS-Huang0714/my-MPS-roberta-based_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-burmese_setfit_en.md b/docs/_posts/ahmedlone127/2024-09-10-burmese_setfit_en.md new file mode 100644 index 00000000000000..6f160880ba3a0a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-burmese_setfit_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English burmese_setfit MPNetEmbeddings from shrikritisingh +author: John Snow Labs +name: burmese_setfit +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_setfit` is a English model originally trained by shrikritisingh. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_setfit_en_5.5.0_3.0_1725963665790.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_setfit_en_5.5.0_3.0_1725963665790.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("burmese_setfit","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("burmese_setfit","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
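+
+If plain vector columns are more convenient downstream, an `EmbeddingsFinisher` stage can be appended to the Python pipeline above; the extra stage is a sketch and not part of the original example:
+
+```python
+from sparknlp.base import EmbeddingsFinisher
+
+finisher = EmbeddingsFinisher() \
+    .setInputCols(["embeddings"]) \
+    .setOutputCols(["sentence_vector"]) \
+    .setOutputAsVector(True)
+
+# Re-uses documentAssembler, embeddings and data from the example above.
+pipeline = Pipeline().setStages([documentAssembler, embeddings, finisher])
+pipelineDF = pipeline.fit(data).transform(data)
+pipelineDF.select("sentence_vector").show(truncate=False)
+```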
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_setfit| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/shrikritisingh/my-setfit \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-burmese_setfit_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-burmese_setfit_pipeline_en.md new file mode 100644 index 00000000000000..30b7d81bc1a91f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-burmese_setfit_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English burmese_setfit_pipeline pipeline MPNetEmbeddings from shrikritisingh +author: John Snow Labs +name: burmese_setfit_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`burmese_setfit_pipeline` is a English model originally trained by shrikritisingh. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/burmese_setfit_pipeline_en_5.5.0_3.0_1725963687407.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/burmese_setfit_pipeline_en_5.5.0_3.0_1725963687407.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("burmese_setfit_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("burmese_setfit_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
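+
+As in the other pretrained pipelines, `df` is left undefined above; a minimal sketch, assuming the exported DocumentAssembler reads the default `text` column and the sentence embeddings are written to an `embeddings` column (both assumptions):
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")  # column name is an assumption
+
+pipeline = PretrainedPipeline("burmese_setfit_pipeline", lang="en")
+annotations = pipeline.transform(df)
+annotations.selectExpr("explode(embeddings.embeddings) as sentence_embedding").show(truncate=False)
+```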
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|burmese_setfit_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/shrikritisingh/my-setfit + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-candle_cvss_integrity_en.md b/docs/_posts/ahmedlone127/2024-09-10-candle_cvss_integrity_en.md new file mode 100644 index 00000000000000..7b269169845e93 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-candle_cvss_integrity_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English candle_cvss_integrity MPNetForSequenceClassification from iashour +author: John Snow Labs +name: candle_cvss_integrity +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, mpnet] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`candle_cvss_integrity` is a English model originally trained by iashour. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/candle_cvss_integrity_en_5.5.0_3.0_1725947555422.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/candle_cvss_integrity_en_5.5.0_3.0_1725947555422.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = MPNetForSequenceClassification.pretrained("candle_cvss_integrity","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = MPNetForSequenceClassification.pretrained("candle_cvss_integrity", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
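+
+Continuing the Python example above, the predicted label can be read from the `class` annotation column:
+
+```python
+# .result holds the predicted label for each input row.
+pipelineDF.select("text", "class.result").show(truncate=False)
+```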
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|candle_cvss_integrity| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.5 MB| + +## References + +https://huggingface.co/iashour/CANDLE_cvss_integrity \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-climatebert_finetuned_qa_policy_long_en.md b/docs/_posts/ahmedlone127/2024-09-10-climatebert_finetuned_qa_policy_long_en.md new file mode 100644 index 00000000000000..adeccb29420e81 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-climatebert_finetuned_qa_policy_long_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English climatebert_finetuned_qa_policy_long RoBertaForQuestionAnswering from peter2000 +author: John Snow Labs +name: climatebert_finetuned_qa_policy_long +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`climatebert_finetuned_qa_policy_long` is a English model originally trained by peter2000. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/climatebert_finetuned_qa_policy_long_en_5.5.0_3.0_1725958743782.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/climatebert_finetuned_qa_policy_long_en_5.5.0_3.0_1725958743782.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = MultiDocumentAssembler() \ + .setInputCol(["question", "context"]) \ + .setOutputCol(["document_question", "document_context"]) + +spanClassifier = RoBertaForQuestionAnswering.pretrained("climatebert_finetuned_qa_policy_long","en") \ + .setInputCols(["document_question","document_context"]) \ + .setOutputCol("answer") + +pipeline = Pipeline().setStages([documentAssembler, spanClassifier]) +data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("document_question", "document_context") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new MultiDocumentAssembler() + .setInputCol(Array("question", "context")) + .setOutputCol(Array("document_question", "document_context")) + +val spanClassifier = RoBertaForQuestionAnswering.pretrained("climatebert_finetuned_qa_policy_long", "en") + .setInputCols(Array("document_question","document_context")) + .setOutputCol("answer") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier)) +val data = Seq("What framework do I use?","I use spark-nlp.").toDS.toDF("document_question", "document_context") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|climatebert_finetuned_qa_policy_long| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|307.4 MB| + +## References + +https://huggingface.co/peter2000/climateBert-finetuned-qa-policy_long \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-coha1960s_en.md b/docs/_posts/ahmedlone127/2024-09-10-coha1960s_en.md new file mode 100644 index 00000000000000..3c6f091baf5210 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-coha1960s_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English coha1960s RoBertaEmbeddings from simonmun +author: John Snow Labs +name: coha1960s +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`coha1960s` is a English model originally trained by simonmun. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/coha1960s_en_5.5.0_3.0_1725930917079.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/coha1960s_en_5.5.0_3.0_1725930917079.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("coha1960s","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("coha1960s","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|coha1960s| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|311.6 MB| + +## References + +https://huggingface.co/simonmun/COHA1960s \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-coha1960s_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-coha1960s_pipeline_en.md new file mode 100644 index 00000000000000..6ee97b8b9c468a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-coha1960s_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English coha1960s_pipeline pipeline RoBertaEmbeddings from simonmun +author: John Snow Labs +name: coha1960s_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`coha1960s_pipeline` is a English model originally trained by simonmun. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/coha1960s_pipeline_en_5.5.0_3.0_1725930931492.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/coha1960s_pipeline_en_5.5.0_3.0_1725930931492.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("coha1960s_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("coha1960s_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
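+
+For quick checks on single strings, the same pretrained pipeline can also be used through `annotate`, which returns plain Python lists instead of a DataFrame; a small sketch, assuming the pipeline accepts raw text:
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("coha1960s_pipeline", lang="en")
+annotations = pipeline.annotate("I love spark-nlp")
+print(annotations.keys())  # output column names produced by the pipeline
+```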
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|coha1960s_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|311.6 MB| + +## References + +https://huggingface.co/simonmun/COHA1960s + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-cot_ep3_35_en.md b/docs/_posts/ahmedlone127/2024-09-10-cot_ep3_35_en.md new file mode 100644 index 00000000000000..bf5c7a3c697bae --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-cot_ep3_35_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English cot_ep3_35 MPNetEmbeddings from ingeol +author: John Snow Labs +name: cot_ep3_35 +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cot_ep3_35` is a English model originally trained by ingeol. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cot_ep3_35_en_5.5.0_3.0_1725963845139.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cot_ep3_35_en_5.5.0_3.0_1725963845139.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("cot_ep3_35","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("cot_ep3_35","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cot_ep3_35| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/ingeol/cot_ep3_35 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-covid_vaccine_sentiment_analysis_roberta_model_newtonkimathi_en.md b/docs/_posts/ahmedlone127/2024-09-10-covid_vaccine_sentiment_analysis_roberta_model_newtonkimathi_en.md new file mode 100644 index 00000000000000..8498f73ff319f6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-covid_vaccine_sentiment_analysis_roberta_model_newtonkimathi_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English covid_vaccine_sentiment_analysis_roberta_model_newtonkimathi RoBertaForSequenceClassification from NewtonKimathi +author: John Snow Labs +name: covid_vaccine_sentiment_analysis_roberta_model_newtonkimathi +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`covid_vaccine_sentiment_analysis_roberta_model_newtonkimathi` is a English model originally trained by NewtonKimathi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/covid_vaccine_sentiment_analysis_roberta_model_newtonkimathi_en_5.5.0_3.0_1725962342327.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/covid_vaccine_sentiment_analysis_roberta_model_newtonkimathi_en_5.5.0_3.0_1725962342327.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("covid_vaccine_sentiment_analysis_roberta_model_newtonkimathi","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("covid_vaccine_sentiment_analysis_roberta_model_newtonkimathi", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|covid_vaccine_sentiment_analysis_roberta_model_newtonkimathi| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|436.0 MB| + +## References + +https://huggingface.co/NewtonKimathi/Covid_Vaccine_Sentiment_Analysis_Roberta_Model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-cropwiz_qa_model_en.md b/docs/_posts/ahmedlone127/2024-09-10-cropwiz_qa_model_en.md new file mode 100644 index 00000000000000..84146e91164298 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-cropwiz_qa_model_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English cropwiz_qa_model DistilBertForQuestionAnswering from Thenghuy +author: John Snow Labs +name: cropwiz_qa_model +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cropwiz_qa_model` is a English model originally trained by Thenghuy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cropwiz_qa_model_en_5.5.0_3.0_1725931917894.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cropwiz_qa_model_en_5.5.0_3.0_1725931917894.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = MultiDocumentAssembler() \ + .setInputCol(["question", "context"]) \ + .setOutputCol(["document_question", "document_context"]) + +spanClassifier = DistilBertForQuestionAnswering.pretrained("cropwiz_qa_model","en") \ + .setInputCols(["document_question","document_context"]) \ + .setOutputCol("answer") + +pipeline = Pipeline().setStages([documentAssembler, spanClassifier]) +data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("document_question", "document_context") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new MultiDocumentAssembler() + .setInputCol(Array("question", "context")) + .setOutputCol(Array("document_question", "document_context")) + +val spanClassifier = DistilBertForQuestionAnswering.pretrained("cropwiz_qa_model", "en") + .setInputCols(Array("document_question","document_context")) + .setOutputCol("answer") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier)) +val data = Seq("What framework do I use?","I use spark-nlp.").toDS.toDF("document_question", "document_context") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cropwiz_qa_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Thenghuy/cropwiz_qa_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-cropwiz_qa_model_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-cropwiz_qa_model_pipeline_en.md new file mode 100644 index 00000000000000..07a8381c31b378 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-cropwiz_qa_model_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English cropwiz_qa_model_pipeline pipeline DistilBertForQuestionAnswering from Thenghuy +author: John Snow Labs +name: cropwiz_qa_model_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cropwiz_qa_model_pipeline` is a English model originally trained by Thenghuy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cropwiz_qa_model_pipeline_en_5.5.0_3.0_1725931930930.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cropwiz_qa_model_pipeline_en_5.5.0_3.0_1725931930930.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("cropwiz_qa_model_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("cropwiz_qa_model_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cropwiz_qa_model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Thenghuy/cropwiz_qa_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-cros_2_en.md b/docs/_posts/ahmedlone127/2024-09-10-cros_2_en.md new file mode 100644 index 00000000000000..2078d9d758ded5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-cros_2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English cros_2 RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: cros_2 +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cros_2` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cros_2_en_5.5.0_3.0_1725962039231.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cros_2_en_5.5.0_3.0_1725962039231.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("cros_2","en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("cros_2", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cros_2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Cros_2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-cuad_distil_document_name_cased_08_31_v1_en.md b/docs/_posts/ahmedlone127/2024-09-10-cuad_distil_document_name_cased_08_31_v1_en.md new file mode 100644 index 00000000000000..d0a2b2652774a4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-cuad_distil_document_name_cased_08_31_v1_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English cuad_distil_document_name_cased_08_31_v1 DistilBertForQuestionAnswering from saraks +author: John Snow Labs +name: cuad_distil_document_name_cased_08_31_v1 +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cuad_distil_document_name_cased_08_31_v1` is a English model originally trained by saraks. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cuad_distil_document_name_cased_08_31_v1_en_5.5.0_3.0_1725960065991.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cuad_distil_document_name_cased_08_31_v1_en_5.5.0_3.0_1725960065991.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = MultiDocumentAssembler() \ + .setInputCol(["question", "context"]) \ + .setOutputCol(["document_question", "document_context"]) + +spanClassifier = DistilBertForQuestionAnswering.pretrained("cuad_distil_document_name_cased_08_31_v1","en") \ + .setInputCols(["document_question","document_context"]) \ + .setOutputCol("answer") + +pipeline = Pipeline().setStages([documentAssembler, spanClassifier]) +data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("document_question", "document_context") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new MultiDocumentAssembler() + .setInputCol(Array("question", "context")) + .setOutputCol(Array("document_question", "document_context")) + +val spanClassifier = DistilBertForQuestionAnswering.pretrained("cuad_distil_document_name_cased_08_31_v1", "en") + .setInputCols(Array("document_question","document_context")) + .setOutputCol("answer") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier)) +val data = Seq("What framework do I use?","I use spark-nlp.").toDS.toDF("document_question", "document_context") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cuad_distil_document_name_cased_08_31_v1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|243.8 MB| + +## References + +https://huggingface.co/saraks/cuad-distil-document_name-cased-08-31-v1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-cuad_distil_parties_cased_08_31_v1_en.md b/docs/_posts/ahmedlone127/2024-09-10-cuad_distil_parties_cased_08_31_v1_en.md new file mode 100644 index 00000000000000..c72375dbbd2c63 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-cuad_distil_parties_cased_08_31_v1_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English cuad_distil_parties_cased_08_31_v1 DistilBertForQuestionAnswering from saraks +author: John Snow Labs +name: cuad_distil_parties_cased_08_31_v1 +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`cuad_distil_parties_cased_08_31_v1` is a English model originally trained by saraks. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/cuad_distil_parties_cased_08_31_v1_en_5.5.0_3.0_1725931945636.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/cuad_distil_parties_cased_08_31_v1_en_5.5.0_3.0_1725931945636.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = MultiDocumentAssembler() \ + .setInputCol(["question", "context"]) \ + .setOutputCol(["document_question", "document_context"]) + +spanClassifier = DistilBertForQuestionAnswering.pretrained("cuad_distil_parties_cased_08_31_v1","en") \ + .setInputCols(["document_question","document_context"]) \ + .setOutputCol("answer") + +pipeline = Pipeline().setStages([documentAssembler, spanClassifier]) +data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("document_question", "document_context") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new MultiDocumentAssembler() + .setInputCol(Array("question", "context")) + .setOutputCol(Array("document_question", "document_context")) + +val spanClassifier = DistilBertForQuestionAnswering.pretrained("cuad_distil_parties_cased_08_31_v1", "en") + .setInputCols(Array("document_question","document_context")) + .setOutputCol("answer") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier)) +val data = Seq("What framework do I use?","I use spark-nlp.").toDS.toDF("document_question", "document_context") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|cuad_distil_parties_cased_08_31_v1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|243.8 MB| + +## References + +https://huggingface.co/saraks/cuad-distil-parties-cased-08-31-v1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-dataset_en.md b/docs/_posts/ahmedlone127/2024-09-10-dataset_en.md new file mode 100644 index 00000000000000..a5d912f6dbb29c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-dataset_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English dataset DistilBertForQuestionAnswering from ajaydvrj +author: John Snow Labs +name: dataset +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dataset` is a English model originally trained by ajaydvrj. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dataset_en_5.5.0_3.0_1725932240458.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dataset_en_5.5.0_3.0_1725932240458.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = MultiDocumentAssembler() \ + .setInputCol(["question", "context"]) \ + .setOutputCol(["document_question", "document_context"]) + +spanClassifier = DistilBertForQuestionAnswering.pretrained("dataset","en") \ + .setInputCols(["document_question","document_context"]) \ + .setOutputCol("answer") + +pipeline = Pipeline().setStages([documentAssembler, spanClassifier]) +data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("document_question", "document_context") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new MultiDocumentAssembler() + .setInputCol(Array("question", "context")) + .setOutputCol(Array("document_question", "document_context")) + +val spanClassifier = DistilBertForQuestionAnswering.pretrained("dataset", "en") + .setInputCols(Array("document_question","document_context")) + .setOutputCol("answer") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier)) +val data = Seq("What framework do I use?","I use spark-nlp.").toDS.toDF("document_question", "document_context") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dataset| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/ajaydvrj/dataset \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-ddi_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-ddi_pipeline_en.md new file mode 100644 index 00000000000000..908988db52059d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-ddi_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English ddi_pipeline pipeline RoBertaForSequenceClassification from Zabbonat +author: John Snow Labs +name: ddi_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`ddi_pipeline` is a English model originally trained by Zabbonat. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/ddi_pipeline_en_5.5.0_3.0_1725962431227.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/ddi_pipeline_en_5.5.0_3.0_1725962431227.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("ddi_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("ddi_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|ddi_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|430.5 MB| + +## References + +https://huggingface.co/Zabbonat/DDI + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_cased_qa_squad2_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_cased_qa_squad2_en.md new file mode 100644 index 00000000000000..20a962e54d7530 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_cased_qa_squad2_en.md @@ -0,0 +1,102 @@ +--- +layout: model +title: English DistilBertForQuestionAnswering model +author: John Snow Labs +name: distilbert_base_cased_qa_squad2 +date: 2024-09-10 +tags: [open_source, distilbert, question_answering, en, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained Question Answering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `distilbert-base-cased-distilled-squad` is a English model originally trained by Hugging Face. + +## Predicted Entities + + + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_cased_qa_squad2_en_5.5.0_3.0_1725946219459.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_cased_qa_squad2_en_5.5.0_3.0_1725946219459.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+documentAssembler = MultiDocumentAssembler() \
+.setInputCols(["question", "context"]) \
+.setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_cased_qa_squad2","en") \
+.setInputCols(["document_question", "document_context"]) \
+.setOutputCol("answer")\
+.setCaseSensitive(True)
+
+pipeline = Pipeline(stages=[documentAssembler, spanClassifier])
+
+data = spark.createDataFrame([["What is my name?", "My name is Clara and I live in Berkeley."]]).toDF("question", "context")
+
+result = pipeline.fit(data).transform(data)
+```
+```scala
+val documentAssembler = new MultiDocumentAssembler()
+.setInputCols(Array("question", "context"))
+.setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_cased_qa_squad2","en")
+.setInputCols(Array("document_question", "document_context"))
+.setOutputCol("answer")
+.setCaseSensitive(true)
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+
+val data = Seq(("What is my name?", "My name is Clara and I live in Berkeley.")).toDF("question", "context")
+
+val result = pipeline.fit(data).transform(data)
+```
+
+{:.nlu-block}
+```python
+import nlu
+nlu.load("en.answer_question.squadv2.distil_bert.base_cased").predict("""What is my name?|||"My name is Clara and I live in Berkeley.""")
+```
+
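+
+Beyond the answer text, each annotation also records where the predicted span begins and ends inside the context; a follow-on sketch using `result` from the Python example above:
+
+```python
+from pyspark.sql import functions as F
+
+# begin/end are character offsets of the predicted span.
+(result
+    .select(F.explode("answer").alias("a"))
+    .select("a.result", "a.begin", "a.end")
+    .show(truncate=False))
+```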
+
+{:.model-param}
+## Model Information
+
+{:.table-model}
+|---|---|
+|Model Name:|distilbert_base_cased_qa_squad2|
+|Compatibility:|Spark NLP 5.5.0+|
+|License:|Open Source|
+|Edition:|Official|
+|Input Labels:|[document_question, document_context]|
+|Output Labels:|[answer]|
+|Language:|en|
+|Size:|243.6 MB|
+|Case sensitive:|false|
+|Max sentence length:|512|
+
+## References
+
+https://huggingface.co/distilbert-base-cased-distilled-squad
\ No newline at end of file
diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_aarnow_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_aarnow_en.md
new file mode 100644
index 00000000000000..6cb058b960e6f7
--- /dev/null
+++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_aarnow_en.md
@@ -0,0 +1,94 @@
+---
+layout: model
+title: English distilbert_base_uncased_finetuned_imdb_aarnow DistilBertEmbeddings from aarnow
+author: John Snow Labs
+name: distilbert_base_uncased_finetuned_imdb_aarnow
+date: 2024-09-10
+tags: [en, open_source, onnx, embeddings, distilbert]
+task: Embeddings
+language: en
+edition: Spark NLP 5.5.0
+spark_version: 3.0
+supported: true
+engine: onnx
+annotator: DistilBertEmbeddings
+article_header:
+  type: cover
+use_language_switcher: "Python-Scala-Java"
+---
+
+## Description
+
+Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `distilbert_base_uncased_finetuned_imdb_aarnow` is an English model originally trained by aarnow.
+
+{:.btn-box}
+
+
+[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_aarnow_en_5.5.0_3.0_1725946358721.zip){:.button.button-orange.button-orange-trans.arr.button-icon}
+[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_aarnow_en_5.5.0_3.0_1725946358721.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}
+
+## How to use
+
+
+
+<div class="tabs-box" markdown="1">
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_aarnow","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_aarnow","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
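+
+To see what the embedding stage actually produces, the annotation structs in the `embeddings` column of `pipelineDF` can be exploded into one row per token. A sketch using plain Spark SQL on the example above:
+
+```python
+# One row per token: the token text and its embedding vector.
+pipelineDF.selectExpr("explode(embeddings) as emb") \
+    .selectExpr("emb.result as token", "emb.embeddings as vector") \
+    .show(truncate=80)
+```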
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_aarnow| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/aarnow/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_accelerate_dangurangu_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_accelerate_dangurangu_pipeline_en.md new file mode 100644 index 00000000000000..05e77fba622491 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_accelerate_dangurangu_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_accelerate_dangurangu_pipeline pipeline DistilBertEmbeddings from Dangurangu +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_accelerate_dangurangu_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_accelerate_dangurangu_pipeline` is a English model originally trained by Dangurangu. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_dangurangu_pipeline_en_5.5.0_3.0_1725933060230.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_dangurangu_pipeline_en_5.5.0_3.0_1725933060230.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_accelerate_dangurangu_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_accelerate_dangurangu_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_accelerate_dangurangu_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Dangurangu/distilbert-base-uncased-finetuned-imdb-accelerate + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_accelerate_dreamuno_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_accelerate_dreamuno_pipeline_en.md new file mode 100644 index 00000000000000..30b9704f917f56 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_accelerate_dreamuno_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_accelerate_dreamuno_pipeline pipeline DistilBertEmbeddings from Dreamuno +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_accelerate_dreamuno_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_accelerate_dreamuno_pipeline` is a English model originally trained by Dreamuno. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_dreamuno_pipeline_en_5.5.0_3.0_1725933063820.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_dreamuno_pipeline_en_5.5.0_3.0_1725933063820.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_accelerate_dreamuno_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_accelerate_dreamuno_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_accelerate_dreamuno_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Dreamuno/distilbert-base-uncased-finetuned-imdb-accelerate + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_accelerate_msong_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_accelerate_msong_en.md new file mode 100644 index 00000000000000..0c819618b709f4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_accelerate_msong_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_accelerate_msong DistilBertEmbeddings from msong +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_accelerate_msong +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_accelerate_msong` is a English model originally trained by msong. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_msong_en_5.5.0_3.0_1725933130700.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_accelerate_msong_en_5.5.0_3.0_1725933130700.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_accelerate_msong","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_accelerate_msong","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_accelerate_msong| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/msong/distilbert-base-uncased-finetuned-imdb-accelerate \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_jeph864_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_jeph864_pipeline_en.md new file mode 100644 index 00000000000000..f7d5a89e6c56c3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_jeph864_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_jeph864_pipeline pipeline DistilBertEmbeddings from jeph864 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_jeph864_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_jeph864_pipeline` is a English model originally trained by jeph864. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_jeph864_pipeline_en_5.5.0_3.0_1725933475867.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_jeph864_pipeline_en_5.5.0_3.0_1725933475867.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_jeph864_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_jeph864_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
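+
+For quick single-string experiments, the pretrained pipeline can also be called directly instead of going through a DataFrame. A sketch, assuming the embeddings stage of this pipeline writes to a column named `embeddings`:
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_jeph864_pipeline", lang="en")
+
+# fullAnnotate returns one result per input string; each token annotation
+# carries its text (result) and, for embedding annotators, its vector.
+annotated = pipeline.fullAnnotate("I love spark-nlp")[0]
+for ann in annotated["embeddings"]:
+    print(ann.result, len(ann.embeddings))
+```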
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_jeph864_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/jeph864/distilbert-base-uncased-finetuned-imdb + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_jin_cheon_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_jin_cheon_en.md new file mode 100644 index 00000000000000..e52438ed10f159 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_jin_cheon_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_jin_cheon DistilBertEmbeddings from jin-cheon +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_jin_cheon +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_jin_cheon` is a English model originally trained by jin-cheon. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_jin_cheon_en_5.5.0_3.0_1725946595530.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_jin_cheon_en_5.5.0_3.0_1725946595530.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_jin_cheon","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_jin_cheon","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_jin_cheon| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/jin-cheon/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_jin_cheon_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_jin_cheon_pipeline_en.md new file mode 100644 index 00000000000000..679d73e4938eb5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_jin_cheon_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_jin_cheon_pipeline pipeline DistilBertEmbeddings from jin-cheon +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_jin_cheon_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_jin_cheon_pipeline` is a English model originally trained by jin-cheon. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_jin_cheon_pipeline_en_5.5.0_3.0_1725946608030.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_jin_cheon_pipeline_en_5.5.0_3.0_1725946608030.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_jin_cheon_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_jin_cheon_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_jin_cheon_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/jin-cheon/distilbert-base-uncased-finetuned-imdb + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_keylazy_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_keylazy_en.md new file mode 100644 index 00000000000000..dde380addf4fdf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_keylazy_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_keylazy DistilBertEmbeddings from keylazy +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_keylazy +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_keylazy` is a English model originally trained by keylazy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_keylazy_en_5.5.0_3.0_1725933373958.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_keylazy_en_5.5.0_3.0_1725933373958.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_keylazy","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_keylazy","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_keylazy| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/keylazy/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_owentaku_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_owentaku_pipeline_en.md new file mode 100644 index 00000000000000..4a10d1937b83b2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_owentaku_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_owentaku_pipeline pipeline DistilBertEmbeddings from Owentaku +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_owentaku_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_owentaku_pipeline` is a English model originally trained by Owentaku. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_owentaku_pipeline_en_5.5.0_3.0_1725933481175.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_owentaku_pipeline_en_5.5.0_3.0_1725933481175.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_owentaku_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_owentaku_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_owentaku_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Owentaku/distilbert-base-uncased-finetuned-imdb + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_smallfish166_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_smallfish166_en.md new file mode 100644 index 00000000000000..c1f83830cb5383 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_smallfish166_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_smallfish166 DistilBertEmbeddings from smallfish166 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_smallfish166 +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_smallfish166` is a English model originally trained by smallfish166. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_smallfish166_en_5.5.0_3.0_1725946358536.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_smallfish166_en_5.5.0_3.0_1725946358536.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_smallfish166","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_smallfish166","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_smallfish166| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/smallfish166/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_smallfish166_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_smallfish166_pipeline_en.md new file mode 100644 index 00000000000000..a3a7ebf4584f22 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_smallfish166_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_smallfish166_pipeline pipeline DistilBertEmbeddings from smallfish166 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_smallfish166_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_smallfish166_pipeline` is a English model originally trained by smallfish166. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_smallfish166_pipeline_en_5.5.0_3.0_1725946371396.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_smallfish166_pipeline_en_5.5.0_3.0_1725946371396.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_smallfish166_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_smallfish166_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_smallfish166_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/smallfish166/distilbert-base-uncased-finetuned-imdb + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_spokkazo_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_spokkazo_en.md new file mode 100644 index 00000000000000..d53bb18e4a16b2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_spokkazo_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_spokkazo DistilBertEmbeddings from spokkazo +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_spokkazo +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_spokkazo` is a English model originally trained by spokkazo. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_spokkazo_en_5.5.0_3.0_1725935442852.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_spokkazo_en_5.5.0_3.0_1725935442852.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_spokkazo","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_spokkazo","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_spokkazo| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/spokkazo/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_ssv273_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_ssv273_en.md new file mode 100644 index 00000000000000..5f3261c0dc0d23 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_ssv273_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_ssv273 DistilBertEmbeddings from ssv273 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_ssv273 +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_ssv273` is a English model originally trained by ssv273. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_ssv273_en_5.5.0_3.0_1725946671955.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_ssv273_en_5.5.0_3.0_1725946671955.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_ssv273","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_ssv273","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
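+
+The token vectors from the example above can also be pooled into a single sentence vector and compared across rows. A small sketch using mean pooling with NumPy; the pooling strategy is an illustrative assumption, not something prescribed by this model:
+
+```python
+import numpy as np
+
+# Collect the token embeddings for each input row and mean-pool them.
+rows = pipelineDF.select("embeddings.embeddings").collect()
+sentence_vectors = [np.mean(np.array(r[0]), axis=0) for r in rows]
+
+def cosine(a, b):
+    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
+
+# With more than one input row, pairwise similarity can be compared:
+if len(sentence_vectors) > 1:
+    print(cosine(sentence_vectors[0], sentence_vectors[1]))
+```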
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_ssv273| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/ssv273/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_ssv273_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_ssv273_pipeline_en.md new file mode 100644 index 00000000000000..94c9f986893749 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_ssv273_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_ssv273_pipeline pipeline DistilBertEmbeddings from ssv273 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_ssv273_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_ssv273_pipeline` is a English model originally trained by ssv273. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_ssv273_pipeline_en_5.5.0_3.0_1725946684675.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_ssv273_pipeline_en_5.5.0_3.0_1725946684675.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_ssv273_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_ssv273_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_ssv273_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/ssv273/distilbert-base-uncased-finetuned-imdb + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_vasaicrow_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_vasaicrow_pipeline_en.md new file mode 100644 index 00000000000000..4e9318d886ba2b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_vasaicrow_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_vasaicrow_pipeline pipeline DistilBertEmbeddings from vasaicrow +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_vasaicrow_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_vasaicrow_pipeline` is a English model originally trained by vasaicrow. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_vasaicrow_pipeline_en_5.5.0_3.0_1725933118733.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_vasaicrow_pipeline_en_5.5.0_3.0_1725933118733.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_vasaicrow_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_vasaicrow_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_vasaicrow_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/vasaicrow/distilbert-base-uncased-finetuned-imdb + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_y_oguchi_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_y_oguchi_en.md new file mode 100644 index 00000000000000..79b321dceda331 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_y_oguchi_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_y_oguchi DistilBertEmbeddings from y-oguchi +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_y_oguchi +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_y_oguchi` is a English model originally trained by y-oguchi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_y_oguchi_en_5.5.0_3.0_1725933373033.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_y_oguchi_en_5.5.0_3.0_1725933373033.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_y_oguchi","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_base_uncased_finetuned_imdb_y_oguchi","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_y_oguchi| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/y-oguchi/distilbert-base-uncased-finetuned-imdb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_yaojingguo_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_yaojingguo_pipeline_en.md new file mode 100644 index 00000000000000..f6be6a6b91faa1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_imdb_yaojingguo_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_imdb_yaojingguo_pipeline pipeline DistilBertEmbeddings from yaojingguo +author: John Snow Labs +name: distilbert_base_uncased_finetuned_imdb_yaojingguo_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_imdb_yaojingguo_pipeline` is a English model originally trained by yaojingguo. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_yaojingguo_pipeline_en_5.5.0_3.0_1725946614790.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_imdb_yaojingguo_pipeline_en_5.5.0_3.0_1725946614790.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_yaojingguo_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_imdb_yaojingguo_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_imdb_yaojingguo_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/yaojingguo/distilbert-base-uncased-finetuned-imdb + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_masked_financial_reports_sec_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_masked_financial_reports_sec_pipeline_en.md new file mode 100644 index 00000000000000..50c50452c4079a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_masked_financial_reports_sec_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_masked_financial_reports_sec_pipeline pipeline DistilBertEmbeddings from Aventicity +author: John Snow Labs +name: distilbert_base_uncased_finetuned_masked_financial_reports_sec_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_masked_financial_reports_sec_pipeline` is a English model originally trained by Aventicity. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_masked_financial_reports_sec_pipeline_en_5.5.0_3.0_1725946704117.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_masked_financial_reports_sec_pipeline_en_5.5.0_3.0_1725946704117.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_masked_financial_reports_sec_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_masked_financial_reports_sec_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_masked_financial_reports_sec_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Aventicity/distilbert-base-uncased-finetuned-Masked-financial_reports_sec + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_craigdsouza_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_craigdsouza_pipeline_en.md new file mode 100644 index 00000000000000..a8b3c66cc0870d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_craigdsouza_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_craigdsouza_pipeline pipeline DistilBertForQuestionAnswering from craigdsouza +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_craigdsouza_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_craigdsouza_pipeline` is a English model originally trained by craigdsouza. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_craigdsouza_pipeline_en_5.5.0_3.0_1725931875723.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_craigdsouza_pipeline_en_5.5.0_3.0_1725931875723.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_squad_craigdsouza_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_squad_craigdsouza_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
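+
+Because this pipeline starts with a MultiDocumentAssembler, its input DataFrame needs two columns rather than one. A minimal sketch, assuming the assembler reads columns named `question` and `context` and the question-answering stage writes to an `answer` column (the example strings are illustrative):
+
+```python
+import sparknlp
+from sparknlp.pretrained import PretrainedPipeline
+
+spark = sparknlp.start()
+
+# Two input columns: the question and the passage it should be answered from.
+df = spark.createDataFrame(
+    [["What is my name?", "My name is Clara and I live in Berkeley."]]
+).toDF("question", "context")
+
+pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_squad_craigdsouza_pipeline", lang="en")
+annotations = pipeline.transform(df)
+annotations.select("answer.result").show(truncate=False)
+```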
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_craigdsouza_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/craigdsouza/distilbert-base-uncased-finetuned-squad + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_edw144_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_edw144_pipeline_en.md new file mode 100644 index 00000000000000..b1d72473ea3010 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_edw144_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_edw144_pipeline pipeline DistilBertForQuestionAnswering from edw144 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_edw144_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_edw144_pipeline` is a English model originally trained by edw144. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_edw144_pipeline_en_5.5.0_3.0_1725960477237.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_edw144_pipeline_en_5.5.0_3.0_1725960477237.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_squad_edw144_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_squad_edw144_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_edw144_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/edw144/distilbert-base-uncased-finetuned-squad + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_jstotz64_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_jstotz64_en.md new file mode 100644 index 00000000000000..ceb784067d819d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_jstotz64_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_jstotz64 DistilBertForQuestionAnswering from jstotz64 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_jstotz64 +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_jstotz64` is a English model originally trained by jstotz64. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_jstotz64_en_5.5.0_3.0_1725932145190.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_jstotz64_en_5.5.0_3.0_1725932145190.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_jstotz64","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_jstotz64", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
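+
+As a follow-up to the example above, the predicted span can be read back from `pipelineDF`; a short sketch using the standard Spark NLP annotation struct (the `score` metadata key is an assumption and may differ by model):
+
+```python
+# Sketch: flatten the `answer` annotations and show the extracted span.
+pipelineDF.selectExpr("explode(answer) as ans") \
+    .selectExpr(
+        "ans.result as answer",
+        "ans.metadata['score'] as score"  # metadata key is an assumption
+    ).show(truncate=False)
+```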
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_jstotz64| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/jstotz64/distilbert-base-uncased-finetuned-squad \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_jstotz64_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_jstotz64_pipeline_en.md new file mode 100644 index 00000000000000..c4bdef19bdb375 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_jstotz64_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_jstotz64_pipeline pipeline DistilBertForQuestionAnswering from jstotz64 +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_jstotz64_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_jstotz64_pipeline` is a English model originally trained by jstotz64. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_jstotz64_pipeline_en_5.5.0_3.0_1725932157928.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_jstotz64_pipeline_en_5.5.0_3.0_1725932157928.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_squad_jstotz64_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_squad_jstotz64_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_jstotz64_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/jstotz64/distilbert-base-uncased-finetuned-squad + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_orgilj_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_orgilj_en.md new file mode 100644 index 00000000000000..e5122722043f69 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_orgilj_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_orgilj DistilBertForQuestionAnswering from orgilj +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_orgilj +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_orgilj` is a English model originally trained by orgilj. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_orgilj_en_5.5.0_3.0_1725960601090.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_orgilj_en_5.5.0_3.0_1725960601090.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_orgilj","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_orgilj", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_orgilj| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/orgilj/distilbert-base-uncased-finetuned-squad \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_superlazycoder_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_superlazycoder_en.md new file mode 100644 index 00000000000000..6252be7ae848da --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_finetuned_squad_superlazycoder_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English distilbert_base_uncased_finetuned_squad_superlazycoder DistilBertForQuestionAnswering from superlazycoder +author: John Snow Labs +name: distilbert_base_uncased_finetuned_squad_superlazycoder +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_finetuned_squad_superlazycoder` is a English model originally trained by superlazycoder. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_superlazycoder_en_5.5.0_3.0_1725960595099.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_finetuned_squad_superlazycoder_en_5.5.0_3.0_1725960595099.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_superlazycoder","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_finetuned_squad_superlazycoder", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_finetuned_squad_superlazycoder| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/superlazycoder/distilbert-base-uncased-finetuned-squad \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_qa_finetune_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_qa_finetune_pipeline_en.md new file mode 100644 index 00000000000000..68fc9a90dc021a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_qa_finetune_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbert_base_uncased_qa_finetune_pipeline pipeline DistilBertForQuestionAnswering from BlueDruddigon +author: John Snow Labs +name: distilbert_base_uncased_qa_finetune_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_qa_finetune_pipeline` is a English model originally trained by BlueDruddigon. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_qa_finetune_pipeline_en_5.5.0_3.0_1725932033527.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_qa_finetune_pipeline_en_5.5.0_3.0_1725932033527.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_qa_finetune_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_qa_finetune_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_qa_finetune_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/BlueDruddigon/distilbert-base-uncased-qa-finetune + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_squad2_p95_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_squad2_p95_en.md new file mode 100644 index 00000000000000..eae5a14b9ee857 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_squad2_p95_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English distilbert_base_uncased_squad2_p95 DistilBertForQuestionAnswering from pminha +author: John Snow Labs +name: distilbert_base_uncased_squad2_p95 +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_squad2_p95` is a English model originally trained by pminha. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_squad2_p95_en_5.5.0_3.0_1725932242805.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_squad2_p95_en_5.5.0_3.0_1725932242805.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_squad2_p95","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_base_uncased_squad2_p95", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_squad2_p95| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|112.4 MB| + +## References + +https://huggingface.co/pminha/distilbert-base-uncased-squad2-p95 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_squad2_p95_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_squad2_p95_pipeline_en.md new file mode 100644 index 00000000000000..b6b00785320a8f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_base_uncased_squad2_p95_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbert_base_uncased_squad2_p95_pipeline pipeline DistilBertForQuestionAnswering from pminha +author: John Snow Labs +name: distilbert_base_uncased_squad2_p95_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_base_uncased_squad2_p95_pipeline` is a English model originally trained by pminha. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_squad2_p95_pipeline_en_5.5.0_3.0_1725932254112.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_base_uncased_squad2_p95_pipeline_en_5.5.0_3.0_1725932254112.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_base_uncased_squad2_p95_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_base_uncased_squad2_p95_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_base_uncased_squad2_p95_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|112.4 MB| + +## References + +https://huggingface.co/pminha/distilbert-base-uncased-squad2-p95 + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_qa_mysquadv2_8Jan22_finetuned_squad_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_qa_mysquadv2_8Jan22_finetuned_squad_en.md new file mode 100644 index 00000000000000..8c1333a97137b2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_qa_mysquadv2_8Jan22_finetuned_squad_en.md @@ -0,0 +1,98 @@ +--- +layout: model +title: English DistilBertForQuestionAnswering model (from threem) Squad2 +author: John Snow Labs +name: distilbert_qa_mysquadv2_8Jan22_finetuned_squad +date: 2024-09-10 +tags: [en, open_source, distilbert, question_answering, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained Question Answering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `mysquadv2_8Jan22-finetuned-squad` is a English model originally trained by `threem`. + +## Predicted Entities + + + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_qa_mysquadv2_8Jan22_finetuned_squad_en_5.5.0_3.0_1725932032360.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_qa_mysquadv2_8Jan22_finetuned_squad_en_5.5.0_3.0_1725932032360.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+documentAssembler = MultiDocumentAssembler() \
+.setInputCols(["question", "context"]) \
+.setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_qa_mysquadv2_8Jan22_finetuned_squad","en") \
+.setInputCols(["document_question", "document_context"]) \
+.setOutputCol("answer") \
+.setCaseSensitive(True)
+
+pipeline = Pipeline(stages=[documentAssembler, spanClassifier])
+
+data = spark.createDataFrame([["What is my name?", "My name is Clara and I live in Berkeley."]]).toDF("question", "context")
+
+result = pipeline.fit(data).transform(data)
+```
+```scala
+val documentAssembler = new MultiDocumentAssembler()
+.setInputCols(Array("question", "context"))
+.setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbert_qa_mysquadv2_8Jan22_finetuned_squad","en")
+.setInputCols(Array("document_question", "document_context"))
+.setOutputCol("answer")
+.setCaseSensitive(true)
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+
+val data = Seq(("What is my name?", "My name is Clara and I live in Berkeley.")).toDF("question", "context")
+
+val result = pipeline.fit(data).transform(data)
+```
+
+{:.nlu-block}
+```python
+import nlu
+nlu.load("en.answer_question.squadv2.distil_bert.v2.by_threem").predict("""What is my name?|||My name is Clara and I live in Berkeley.""")
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_qa_mysquadv2_8Jan22_finetuned_squad| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|243.8 MB| + +## References + +References + +- https://huggingface.co/threem/mysquadv2_8Jan22-finetuned-squad \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_qa_mysquadv2_8Jan22_finetuned_squad_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_qa_mysquadv2_8Jan22_finetuned_squad_pipeline_en.md new file mode 100644 index 00000000000000..62a0bcde3d44ed --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_qa_mysquadv2_8Jan22_finetuned_squad_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbert_qa_mysquadv2_8Jan22_finetuned_squad_pipeline pipeline DistilBertForQuestionAnswering from threem +author: John Snow Labs +name: distilbert_qa_mysquadv2_8Jan22_finetuned_squad_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_qa_mysquadv2_8Jan22_finetuned_squad_pipeline` is a English model originally trained by threem. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_qa_mysquadv2_8Jan22_finetuned_squad_pipeline_en_5.5.0_3.0_1725932043150.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_qa_mysquadv2_8Jan22_finetuned_squad_pipeline_en_5.5.0_3.0_1725932043150.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbert_qa_mysquadv2_8Jan22_finetuned_squad_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbert_qa_mysquadv2_8Jan22_finetuned_squad_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_qa_mysquadv2_8Jan22_finetuned_squad_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|243.8 MB| + +## References + +https://huggingface.co/threem/mysquadv2_8Jan22-finetuned-squad + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbert_word2vec_256k_mlm_500k_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbert_word2vec_256k_mlm_500k_en.md new file mode 100644 index 00000000000000..d4d00bc78e7182 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbert_word2vec_256k_mlm_500k_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English distilbert_word2vec_256k_mlm_500k DistilBertEmbeddings from vocab-transformers +author: John Snow Labs +name: distilbert_word2vec_256k_mlm_500k +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbert_word2vec_256k_mlm_500k` is a English model originally trained by vocab-transformers. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbert_word2vec_256k_mlm_500k_en_5.5.0_3.0_1725946455861.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbert_word2vec_256k_mlm_500k_en_5.5.0_3.0_1725946455861.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("distilbert_word2vec_256k_mlm_500k","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("distilbert_word2vec_256k_mlm_500k","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
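+
+If plain vectors are needed downstream (for example for similarity search or clustering), one option is to flatten the token annotations with `EmbeddingsFinisher`; a sketch, under the assumption that the default finisher settings fit your use case:
+
+```python
+# Sketch: turn token-level embedding annotations into Spark ML vectors.
+from sparknlp.base import EmbeddingsFinisher
+
+finisher = EmbeddingsFinisher() \
+    .setInputCols(["embeddings"]) \
+    .setOutputCols(["finished_embeddings"]) \
+    .setOutputAsVector(True)
+
+finished = finisher.transform(pipelineDF)
+finished.selectExpr("explode(finished_embeddings) as vector").show(5)
+```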
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbert_word2vec_256k_mlm_500k| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|902.0 MB| + +## References + +https://huggingface.co/vocab-transformers/distilbert-word2vec_256k-MLM_500k \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbertfinetunehsfifteenepoch_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbertfinetunehsfifteenepoch_en.md new file mode 100644 index 00000000000000..b5aee2fa7ec3c6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbertfinetunehsfifteenepoch_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English distilbertfinetunehsfifteenepoch DistilBertForQuestionAnswering from KarthikAlagarsamy +author: John Snow Labs +name: distilbertfinetunehsfifteenepoch +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbertfinetunehsfifteenepoch` is a English model originally trained by KarthikAlagarsamy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbertfinetunehsfifteenepoch_en_5.5.0_3.0_1725932132638.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbertfinetunehsfifteenepoch_en_5.5.0_3.0_1725932132638.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbertfinetunehsfifteenepoch","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("distilbertfinetunehsfifteenepoch", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbertfinetunehsfifteenepoch| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/KarthikAlagarsamy/distilbertfinetuneHSfifteenepoch \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distilbertfinetunehsfifteenepoch_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-distilbertfinetunehsfifteenepoch_pipeline_en.md new file mode 100644 index 00000000000000..2c3295b1219e43 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distilbertfinetunehsfifteenepoch_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English distilbertfinetunehsfifteenepoch_pipeline pipeline DistilBertForQuestionAnswering from KarthikAlagarsamy +author: John Snow Labs +name: distilbertfinetunehsfifteenepoch_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distilbertfinetunehsfifteenepoch_pipeline` is a English model originally trained by KarthikAlagarsamy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbertfinetunehsfifteenepoch_pipeline_en_5.5.0_3.0_1725932144197.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbertfinetunehsfifteenepoch_pipeline_en_5.5.0_3.0_1725932144197.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("distilbertfinetunehsfifteenepoch_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("distilbertfinetunehsfifteenepoch_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distilbertfinetunehsfifteenepoch_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/KarthikAlagarsamy/distilbertfinetuneHSfifteenepoch + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-distortion_model_en.md b/docs/_posts/ahmedlone127/2024-09-10-distortion_model_en.md new file mode 100644 index 00000000000000..fe96c9ea639a9f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-distortion_model_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English distortion_model MPNetEmbeddings from marco-gancitano +author: John Snow Labs +name: distortion_model +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`distortion_model` is a English model originally trained by marco-gancitano. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distortion_model_en_5.5.0_3.0_1725936886110.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distortion_model_en_5.5.0_3.0_1725936886110.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("distortion_model","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("distortion_model","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
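+
+`MPNetEmbeddings` emits one sentence-level vector per document here; a minimal sketch of reading it out of `pipelineDF`, using the standard Spark NLP annotation fields:
+
+```python
+# Sketch: each `embeddings` annotation carries the sentence vector as array<float>.
+pipelineDF.selectExpr("explode(embeddings) as emb") \
+    .selectExpr("emb.embeddings as sentence_vector") \
+    .show(1, truncate=80)
+```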
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|distortion_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/marco-gancitano/distortion-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-dock_0_en.md b/docs/_posts/ahmedlone127/2024-09-10-dock_0_en.md new file mode 100644 index 00000000000000..b654d48c6c19b0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-dock_0_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dock_0 RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: dock_0 +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dock_0` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dock_0_en_5.5.0_3.0_1725964966837.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dock_0_en_5.5.0_3.0_1725964966837.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("dock_0","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("dock_0", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
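+
+The predicted label lands in the `class` annotation column of `pipelineDF`; a small sketch of reading it (the per-label scores shown from `metadata` are an assumption and depend on how the classifier exposes them):
+
+```python
+# Sketch: show the predicted label for each input row.
+pipelineDF.select("text", "class.result").show(truncate=False)
+
+# Optionally inspect the metadata map, which typically carries per-label scores.
+pipelineDF.selectExpr("explode(`class`) as c") \
+    .selectExpr("c.result as label", "c.metadata").show(truncate=False)
+```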
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dock_0| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Dock_0 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-dock_0_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-dock_0_pipeline_en.md new file mode 100644 index 00000000000000..90c62c94166abd --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-dock_0_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dock_0_pipeline pipeline RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: dock_0_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dock_0_pipeline` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dock_0_pipeline_en_5.5.0_3.0_1725964990865.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dock_0_pipeline_en_5.5.0_3.0_1725964990865.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dock_0_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dock_0_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dock_0_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Dock_0 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-dock_4_en.md b/docs/_posts/ahmedlone127/2024-09-10-dock_4_en.md new file mode 100644 index 00000000000000..80d8b77434b71c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-dock_4_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dock_4 RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: dock_4 +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dock_4` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dock_4_en_5.5.0_3.0_1725962516925.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dock_4_en_5.5.0_3.0_1725962516925.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("dock_4","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("dock_4", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dock_4| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Dock_4 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-dummy_model_aman_1210_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-dummy_model_aman_1210_pipeline_en.md new file mode 100644 index 00000000000000..089b172e0b912c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-dummy_model_aman_1210_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_aman_1210_pipeline pipeline CamemBertEmbeddings from aman-1210 +author: John Snow Labs +name: dummy_model_aman_1210_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_aman_1210_pipeline` is a English model originally trained by aman-1210. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_aman_1210_pipeline_en_5.5.0_3.0_1725939121901.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_aman_1210_pipeline_en_5.5.0_3.0_1725939121901.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_aman_1210_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_aman_1210_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
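+
+This pipeline chains DocumentAssembler, TokenizerModel and CamemBertEmbeddings, so it expects a plain text column; a sketch of feeding it data, assuming the assembler reads a column named `text` and the embeddings are written to a `camembert` column (both names are assumptions based on the standalone model card, not read from the pipeline itself):
+
+```python
+# Sketch (assumed column names "text" and "camembert").
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("dummy_model_aman_1210_pipeline", lang = "en")
+
+df = spark.createDataFrame([("J'aime Spark NLP",)], ["text"])
+annotations = pipeline.transform(df)
+annotations.selectExpr("explode(camembert) as emb") \
+    .selectExpr("emb.result as token", "emb.embeddings") \
+    .show(5)
+```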
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_aman_1210_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/aman-1210/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-dummy_model_dylwil3_en.md b/docs/_posts/ahmedlone127/2024-09-10-dummy_model_dylwil3_en.md new file mode 100644 index 00000000000000..cb6f19559ace4b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-dummy_model_dylwil3_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_dylwil3 CamemBertEmbeddings from dylwil3 +author: John Snow Labs +name: dummy_model_dylwil3 +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_dylwil3` is a English model originally trained by dylwil3. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_dylwil3_en_5.5.0_3.0_1725938478863.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_dylwil3_en_5.5.0_3.0_1725938478863.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_dylwil3","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_dylwil3","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_dylwil3| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/dylwil3/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-dummy_model_mrsteffe_en.md b/docs/_posts/ahmedlone127/2024-09-10-dummy_model_mrsteffe_en.md new file mode 100644 index 00000000000000..97710bd9a5f45b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-dummy_model_mrsteffe_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English dummy_model_mrsteffe CamemBertEmbeddings from MrSteffe +author: John Snow Labs +name: dummy_model_mrsteffe +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, camembert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_mrsteffe` is a English model originally trained by MrSteffe. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_mrsteffe_en_5.5.0_3.0_1725938615690.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_mrsteffe_en_5.5.0_3.0_1725938615690.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("dummy_model_mrsteffe","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("dummy_model_mrsteffe","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_mrsteffe| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/MrSteffe/dummy-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-dummy_model_qingspring_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-dummy_model_qingspring_pipeline_en.md new file mode 100644 index 00000000000000..961103af66dbee --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-dummy_model_qingspring_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_qingspring_pipeline pipeline CamemBertEmbeddings from Qingspring +author: John Snow Labs +name: dummy_model_qingspring_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_qingspring_pipeline` is a English model originally trained by Qingspring. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_qingspring_pipeline_en_5.5.0_3.0_1725938877358.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_qingspring_pipeline_en_5.5.0_3.0_1725938877358.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_qingspring_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_qingspring_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
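The `df` referenced above is assumed rather than defined in the snippet. As a minimal sketch, it can be any Spark DataFrame with a `text` column (the column the bundled DocumentAssembler is expected to read); `annotate()` is a convenient alternative for single strings.

```python
from sparknlp.pretrained import PretrainedPipeline

# Hypothetical input DataFrame; the pipeline's DocumentAssembler is assumed to read a "text" column.
df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")

pipeline = PretrainedPipeline("dummy_model_qingspring_pipeline", lang = "en")
annotations = pipeline.transform(df)

# For a single string, annotate() returns the results as a plain Python dict.
print(pipeline.annotate("I love spark-nlp"))
```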
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_qingspring_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/Qingspring/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-dummy_model_safik_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-dummy_model_safik_pipeline_en.md new file mode 100644 index 00000000000000..d9d97a7ceee6a7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-dummy_model_safik_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_safik_pipeline pipeline CamemBertEmbeddings from safik +author: John Snow Labs +name: dummy_model_safik_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_safik_pipeline` is a English model originally trained by safik. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_safik_pipeline_en_5.5.0_3.0_1725938456751.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_safik_pipeline_en_5.5.0_3.0_1725938456751.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_safik_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_safik_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_safik_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/safik/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-dummy_model_sanjay1234_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-dummy_model_sanjay1234_pipeline_en.md new file mode 100644 index 00000000000000..8508ed8eaa5ff6 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-dummy_model_sanjay1234_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_sanjay1234_pipeline pipeline CamemBertEmbeddings from Sanjay1234 +author: John Snow Labs +name: dummy_model_sanjay1234_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_sanjay1234_pipeline` is a English model originally trained by Sanjay1234. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_sanjay1234_pipeline_en_5.5.0_3.0_1725939031404.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_sanjay1234_pipeline_en_5.5.0_3.0_1725939031404.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_sanjay1234_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_sanjay1234_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_sanjay1234_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/Sanjay1234/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-dummy_model_sayakadak24_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-dummy_model_sayakadak24_pipeline_en.md new file mode 100644 index 00000000000000..c9d70d9ab69e36 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-dummy_model_sayakadak24_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_sayakadak24_pipeline pipeline CamemBertEmbeddings from sayakadak24 +author: John Snow Labs +name: dummy_model_sayakadak24_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_sayakadak24_pipeline` is a English model originally trained by sayakadak24. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_sayakadak24_pipeline_en_5.5.0_3.0_1725938834362.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_sayakadak24_pipeline_en_5.5.0_3.0_1725938834362.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_sayakadak24_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_sayakadak24_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_sayakadak24_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/sayakadak24/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-dummy_model_zaimazarnaz14_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-dummy_model_zaimazarnaz14_pipeline_en.md new file mode 100644 index 00000000000000..f1eefc0193f338 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-dummy_model_zaimazarnaz14_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English dummy_model_zaimazarnaz14_pipeline pipeline CamemBertEmbeddings from zaimazarnaz14 +author: John Snow Labs +name: dummy_model_zaimazarnaz14_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`dummy_model_zaimazarnaz14_pipeline` is a English model originally trained by zaimazarnaz14. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/dummy_model_zaimazarnaz14_pipeline_en_5.5.0_3.0_1725938617536.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/dummy_model_zaimazarnaz14_pipeline_en_5.5.0_3.0_1725938617536.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("dummy_model_zaimazarnaz14_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("dummy_model_zaimazarnaz14_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|dummy_model_zaimazarnaz14_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|264.0 MB| + +## References + +https://huggingface.co/zaimazarnaz14/dummy-model + +## Included Models + +- DocumentAssembler +- TokenizerModel +- CamemBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-e2e_deployment_en.md b/docs/_posts/ahmedlone127/2024-09-10-e2e_deployment_en.md new file mode 100644 index 00000000000000..3c45f72d04e4c0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-e2e_deployment_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English e2e_deployment BertForSequenceClassification from SamagraDataGov +author: John Snow Labs +name: e2e_deployment +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`e2e_deployment` is a English model originally trained by SamagraDataGov. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/e2e_deployment_en_5.5.0_3.0_1725957646155.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/e2e_deployment_en_5.5.0_3.0_1725957646155.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = BertForSequenceClassification.pretrained("e2e_deployment","en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = BertForSequenceClassification.pretrained("e2e_deployment", "en")
    .setInputCols(Array("document", "token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
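A possible way to read the predictions from the example above (an illustrative addition, assuming the Python pipeline has been run): the predicted label for each row is exposed through the `class` annotation column.

```python
# "class.result" holds the label(s) predicted by BertForSequenceClassification.
pipelineDF.select("text", "class.result").show(truncate=False)
```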
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|e2e_deployment| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.4 MB| + +## References + +https://huggingface.co/SamagraDataGov/e2e_deployment \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-efficient_mlm_m0_05_en.md b/docs/_posts/ahmedlone127/2024-09-10-efficient_mlm_m0_05_en.md new file mode 100644 index 00000000000000..934dd81b5e89c8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-efficient_mlm_m0_05_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English efficient_mlm_m0_05 RoBertaEmbeddings from rzhai +author: John Snow Labs +name: efficient_mlm_m0_05 +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`efficient_mlm_m0_05` is a English model originally trained by rzhai. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/efficient_mlm_m0_05_en_5.5.0_3.0_1725930938880.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/efficient_mlm_m0_05_en_5.5.0_3.0_1725930938880.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("efficient_mlm_m0_05","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("efficient_mlm_m0_05","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|efficient_mlm_m0_05| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|837.5 MB| + +## References + +https://huggingface.co/rzhai/efficient_mlm_m0.05 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-efficient_mlm_m0_50_en.md b/docs/_posts/ahmedlone127/2024-09-10-efficient_mlm_m0_50_en.md new file mode 100644 index 00000000000000..a0319ef1d48d87 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-efficient_mlm_m0_50_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English efficient_mlm_m0_50 RoBertaEmbeddings from princeton-nlp +author: John Snow Labs +name: efficient_mlm_m0_50 +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`efficient_mlm_m0_50` is a English model originally trained by princeton-nlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/efficient_mlm_m0_50_en_5.5.0_3.0_1725931193711.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/efficient_mlm_m0_50_en_5.5.0_3.0_1725931193711.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("efficient_mlm_m0_50","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("efficient_mlm_m0_50","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|efficient_mlm_m0_50| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|844.8 MB| + +## References + +https://huggingface.co/princeton-nlp/efficient_mlm_m0.50 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-efficient_mlm_m0_50_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-efficient_mlm_m0_50_pipeline_en.md new file mode 100644 index 00000000000000..dc719cac8b88bf --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-efficient_mlm_m0_50_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English efficient_mlm_m0_50_pipeline pipeline RoBertaEmbeddings from princeton-nlp +author: John Snow Labs +name: efficient_mlm_m0_50_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`efficient_mlm_m0_50_pipeline` is a English model originally trained by princeton-nlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/efficient_mlm_m0_50_pipeline_en_5.5.0_3.0_1725931431617.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/efficient_mlm_m0_50_pipeline_en_5.5.0_3.0_1725931431617.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("efficient_mlm_m0_50_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("efficient_mlm_m0_50_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|efficient_mlm_m0_50_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|844.8 MB| + +## References + +https://huggingface.co/princeton-nlp/efficient_mlm_m0.50 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-electra_qa_base_v2_finetuned_korquad_384_ko.md b/docs/_posts/ahmedlone127/2024-09-10-electra_qa_base_v2_finetuned_korquad_384_ko.md new file mode 100644 index 00000000000000..95e9cf77b38166 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-electra_qa_base_v2_finetuned_korquad_384_ko.md @@ -0,0 +1,98 @@ +--- +layout: model +title: Korean ElectraForQuestionAnswering model (from monologg) Version-2 +author: John Snow Labs +name: electra_qa_base_v2_finetuned_korquad_384 +date: 2024-09-10 +tags: [ko, open_source, electra, question_answering, onnx] +task: Question Answering +language: ko +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained Question Answering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `koelectra-base-v2-finetuned-korquad-384` is a Korean model originally trained by `monologg`. + +## Predicted Entities + + + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/electra_qa_base_v2_finetuned_korquad_384_ko_5.5.0_3.0_1725926757744.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/electra_qa_base_v2_finetuned_korquad_384_ko_5.5.0_3.0_1725926757744.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = BertForQuestionAnswering.pretrained("electra_qa_base_v2_finetuned_korquad_384","ko") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer") \
    .setCaseSensitive(True)

pipeline = Pipeline(stages=[documentAssembler, spanClassifier])

data = spark.createDataFrame([["내 이름은 무엇입니까?", "제 이름은 클라라이고 저는 버클리에 살고 있습니다."]]).toDF("question", "context")

result = pipeline.fit(data).transform(data)
```
```scala
val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = BertForQuestionAnswering.pretrained("electra_qa_base_v2_finetuned_korquad_384","ko")
    .setInputCols(Array("document_question", "document_context"))
    .setOutputCol("answer")
    .setCaseSensitive(true)

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))

val data = Seq(("내 이름은 무엇입니까?", "제 이름은 클라라이고 저는 버클리에 살고 있습니다.")).toDF("question", "context")

val result = pipeline.fit(data).transform(data)
```

{:.nlu-block}
```python
import nlu
nlu.load("ko.answer_question.korquad.electra.base_v2_384.by_monologg").predict("""내 이름은 무엇입니까?|||제 이름은 클라라이고 저는 버클리에 살고 있습니다.""")
```
</div>
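To read the predicted span from the Python example above, the `answer` annotation column can be unpacked as in this short follow-up (an illustrative addition, assuming `result` has been computed as shown):

```python
# "answer.result" contains the extracted answer text for each question/context pair.
result.select("answer.result").show(truncate=False)
```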
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|electra_qa_base_v2_finetuned_korquad_384| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|ko| +|Size:|411.8 MB| + +## References + +References + +- https://huggingface.co/monologg/koelectra-base-v2-finetuned-korquad-384 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-english_astitchtask1a_robertabase_falsetrue_0_0_best_en.md b/docs/_posts/ahmedlone127/2024-09-10-english_astitchtask1a_robertabase_falsetrue_0_0_best_en.md new file mode 100644 index 00000000000000..4477cf551584c5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-english_astitchtask1a_robertabase_falsetrue_0_0_best_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English english_astitchtask1a_robertabase_falsetrue_0_0_best RoBertaForSequenceClassification from harish +author: John Snow Labs +name: english_astitchtask1a_robertabase_falsetrue_0_0_best +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`english_astitchtask1a_robertabase_falsetrue_0_0_best` is a English model originally trained by harish. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/english_astitchtask1a_robertabase_falsetrue_0_0_best_en_5.5.0_3.0_1725966154145.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/english_astitchtask1a_robertabase_falsetrue_0_0_best_en_5.5.0_3.0_1725966154145.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = RoBertaForSequenceClassification.pretrained("english_astitchtask1a_robertabase_falsetrue_0_0_best","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = RoBertaForSequenceClassification.pretrained("english_astitchtask1a_robertabase_falsetrue_0_0_best", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|english_astitchtask1a_robertabase_falsetrue_0_0_best| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|436.2 MB| + +## References + +https://huggingface.co/harish/EN-AStitchTask1A-RoBERTaBase-FalseTrue-0-0-BEST \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-esteler_distilbert_indonesian_id.md b/docs/_posts/ahmedlone127/2024-09-10-esteler_distilbert_indonesian_id.md new file mode 100644 index 00000000000000..0c8eec21a0c2b5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-esteler_distilbert_indonesian_id.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Indonesian esteler_distilbert_indonesian DistilBertEmbeddings from zaenalium +author: John Snow Labs +name: esteler_distilbert_indonesian +date: 2024-09-10 +tags: [id, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: id +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`esteler_distilbert_indonesian` is a Indonesian model originally trained by zaenalium. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/esteler_distilbert_indonesian_id_5.5.0_3.0_1725946522308.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/esteler_distilbert_indonesian_id_5.5.0_3.0_1725946522308.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("esteler_distilbert_indonesian","id") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("esteler_distilbert_indonesian","id") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|esteler_distilbert_indonesian| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|id| +|Size:|307.6 MB| + +## References + +https://huggingface.co/zaenalium/Esteler-DistilBERT-id \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-financebert_en.md b/docs/_posts/ahmedlone127/2024-09-10-financebert_en.md new file mode 100644 index 00000000000000..838cc4209b2294 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-financebert_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English financebert BertForSequenceClassification from marcev +author: John Snow Labs +name: financebert +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`financebert` is a English model originally trained by marcev. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/financebert_en_5.5.0_3.0_1725957127960.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/financebert_en_5.5.0_3.0_1725957127960.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = BertForSequenceClassification.pretrained("financebert","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = BertForSequenceClassification.pretrained("financebert", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|financebert| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.4 MB| + +## References + +https://huggingface.co/marcev/financebert \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-fine_tune_whisper_kagglex_pipeline_hi.md b/docs/_posts/ahmedlone127/2024-09-10-fine_tune_whisper_kagglex_pipeline_hi.md new file mode 100644 index 00000000000000..b4bf4f6ec16282 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-fine_tune_whisper_kagglex_pipeline_hi.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Hindi fine_tune_whisper_kagglex_pipeline pipeline WhisperForCTC from SakshiRathi77 +author: John Snow Labs +name: fine_tune_whisper_kagglex_pipeline +date: 2024-09-10 +tags: [hi, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: hi +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`fine_tune_whisper_kagglex_pipeline` is a Hindi model originally trained by SakshiRathi77. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/fine_tune_whisper_kagglex_pipeline_hi_5.5.0_3.0_1725953862815.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/fine_tune_whisper_kagglex_pipeline_hi_5.5.0_3.0_1725953862815.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("fine_tune_whisper_kagglex_pipeline", lang = "hi") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("fine_tune_whisper_kagglex_pipeline", lang = "hi") +val annotations = pipeline.transform(df) + +``` +
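The pipeline above consumes assembled audio rather than text, so the `df` it transforms must already contain waveform data. The sketch below illustrates one way the input could be prepared, assuming the bundled AudioAssembler reads a column of float arrays named `audio_content` and that `librosa` is available to decode a local 16 kHz mono recording; the file name and column name are assumptions for illustration, not part of this model card.

```python
import librosa
from sparknlp.pretrained import PretrainedPipeline

# Hypothetical local file; Whisper models expect 16 kHz mono audio.
waveform, _ = librosa.load("sample_hindi_clip.wav", sr=16000)
df = spark.createDataFrame([[waveform.tolist()]]).toDF("audio_content")

pipeline = PretrainedPipeline("fine_tune_whisper_kagglex_pipeline", lang = "hi")
transcripts = pipeline.transform(df)
transcripts.printSchema()  # inspect the output columns; the transcription is written by WhisperForCTC
```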
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|fine_tune_whisper_kagglex_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|hi| +|Size:|1.7 GB| + +## References + +https://huggingface.co/SakshiRathi77/Fine-tune-Whisper-Kagglex + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-fine_tuned_qas_squad_2_with_roberta_large_en.md b/docs/_posts/ahmedlone127/2024-09-10-fine_tuned_qas_squad_2_with_roberta_large_en.md new file mode 100644 index 00000000000000..cb364944ec7534 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-fine_tuned_qas_squad_2_with_roberta_large_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English fine_tuned_qas_squad_2_with_roberta_large RoBertaForQuestionAnswering from muhammadravi251001 +author: John Snow Labs +name: fine_tuned_qas_squad_2_with_roberta_large +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`fine_tuned_qas_squad_2_with_roberta_large` is a English model originally trained by muhammadravi251001. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/fine_tuned_qas_squad_2_with_roberta_large_en_5.5.0_3.0_1725958557394.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/fine_tuned_qas_squad_2_with_roberta_large_en_5.5.0_3.0_1725958557394.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = RoBertaForQuestionAnswering.pretrained("fine_tuned_qas_squad_2_with_roberta_large","en") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = RoBertaForQuestionAnswering.pretrained("fine_tuned_qas_squad_2_with_roberta_large", "en")
    .setInputCols(Array("document_question", "document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|fine_tuned_qas_squad_2_with_roberta_large| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/muhammadravi251001/fine-tuned-QAS-Squad_2-with-roberta-large \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-fine_tuning_nlp_en.md b/docs/_posts/ahmedlone127/2024-09-10-fine_tuning_nlp_en.md new file mode 100644 index 00000000000000..d5bc44103aa033 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-fine_tuning_nlp_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English fine_tuning_nlp RoBertaForSequenceClassification from ianlaauu +author: John Snow Labs +name: fine_tuning_nlp +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`fine_tuning_nlp` is a English model originally trained by ianlaauu. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/fine_tuning_nlp_en_5.5.0_3.0_1725965827686.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/fine_tuning_nlp_en_5.5.0_3.0_1725965827686.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = RoBertaForSequenceClassification.pretrained("fine_tuning_nlp","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = RoBertaForSequenceClassification.pretrained("fine_tuning_nlp", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|fine_tuning_nlp| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|435.0 MB| + +## References + +https://huggingface.co/ianlaauu/fine-tuning-NLP \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-finetuned_model_kunalmod_en.md b/docs/_posts/ahmedlone127/2024-09-10-finetuned_model_kunalmod_en.md new file mode 100644 index 00000000000000..389dc7075abf4c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-finetuned_model_kunalmod_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English finetuned_model_kunalmod RoBertaForQuestionAnswering from Kunalmod +author: John Snow Labs +name: finetuned_model_kunalmod +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`finetuned_model_kunalmod` is a English model originally trained by Kunalmod. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/finetuned_model_kunalmod_en_5.5.0_3.0_1725959454587.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/finetuned_model_kunalmod_en_5.5.0_3.0_1725959454587.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = RoBertaForQuestionAnswering.pretrained("finetuned_model_kunalmod","en") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = RoBertaForQuestionAnswering.pretrained("finetuned_model_kunalmod", "en")
    .setInputCols(Array("document_question", "document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|finetuned_model_kunalmod| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|463.6 MB| + +## References + +https://huggingface.co/Kunalmod/finetuned-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-fintwitbert_sentiment_stephanakkerman_en.md b/docs/_posts/ahmedlone127/2024-09-10-fintwitbert_sentiment_stephanakkerman_en.md new file mode 100644 index 00000000000000..c5f2c374e6a664 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-fintwitbert_sentiment_stephanakkerman_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English fintwitbert_sentiment_stephanakkerman BertForSequenceClassification from StephanAkkerman +author: John Snow Labs +name: fintwitbert_sentiment_stephanakkerman +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`fintwitbert_sentiment_stephanakkerman` is a English model originally trained by StephanAkkerman. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/fintwitbert_sentiment_stephanakkerman_en_5.5.0_3.0_1725957742972.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/fintwitbert_sentiment_stephanakkerman_en_5.5.0_3.0_1725957742972.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = BertForSequenceClassification.pretrained("fintwitbert_sentiment_stephanakkerman","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = BertForSequenceClassification.pretrained("fintwitbert_sentiment_stephanakkerman", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|fintwitbert_sentiment_stephanakkerman| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|411.7 MB| + +## References + +https://huggingface.co/StephanAkkerman/FinTwitBERT-sentiment \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-gdpr_consent_agreement_en.md b/docs/_posts/ahmedlone127/2024-09-10-gdpr_consent_agreement_en.md new file mode 100644 index 00000000000000..c4fedc86347131 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-gdpr_consent_agreement_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English gdpr_consent_agreement BertForSequenceClassification from rdhinaz +author: John Snow Labs +name: gdpr_consent_agreement +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`gdpr_consent_agreement` is a English model originally trained by rdhinaz. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/gdpr_consent_agreement_en_5.5.0_3.0_1725957559890.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/gdpr_consent_agreement_en_5.5.0_3.0_1725957559890.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = BertForSequenceClassification.pretrained("gdpr_consent_agreement","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = BertForSequenceClassification.pretrained("gdpr_consent_agreement", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|gdpr_consent_agreement| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.4 MB| + +## References + +https://huggingface.co/rdhinaz/gdpr_consent_agreement \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-hate_detector_en.md b/docs/_posts/ahmedlone127/2024-09-10-hate_detector_en.md new file mode 100644 index 00000000000000..3e281b872271be --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-hate_detector_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English hate_detector RoBertaForSequenceClassification from ishaansharma +author: John Snow Labs +name: hate_detector +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`hate_detector` is a English model originally trained by ishaansharma. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/hate_detector_en_5.5.0_3.0_1725962613883.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/hate_detector_en_5.5.0_3.0_1725962613883.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol('text') \ + .setOutputCol('document') + +tokenizer = Tokenizer() \ + .setInputCols(['document']) \ + .setOutputCol('token') + +sequenceClassifier = RoBertaForSequenceClassification.pretrained("hate_detector","en") \ + .setInputCols(["documents","token"]) \ + .setOutputCol("class") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCols("text") + .setOutputCols("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val sequenceClassifier = RoBertaForSequenceClassification.pretrained("hate_detector", "en") + .setInputCols(Array("documents","token")) + .setOutputCol("class") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier)) +val data = Seq("I love spark-nlp").toDS.toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|hate_detector| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.1 MB| + +## References + +https://huggingface.co/ishaansharma/hate-detector \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-hate_hate_balance_random0_seed1_twitter_roberta_base_dec2020_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-hate_hate_balance_random0_seed1_twitter_roberta_base_dec2020_pipeline_en.md new file mode 100644 index 00000000000000..a981f5e3d8fa39 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-hate_hate_balance_random0_seed1_twitter_roberta_base_dec2020_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English hate_hate_balance_random0_seed1_twitter_roberta_base_dec2020_pipeline pipeline RoBertaForSequenceClassification from tweettemposhift +author: John Snow Labs +name: hate_hate_balance_random0_seed1_twitter_roberta_base_dec2020_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`hate_hate_balance_random0_seed1_twitter_roberta_base_dec2020_pipeline` is a English model originally trained by tweettemposhift. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/hate_hate_balance_random0_seed1_twitter_roberta_base_dec2020_pipeline_en_5.5.0_3.0_1725966195814.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/hate_hate_balance_random0_seed1_twitter_roberta_base_dec2020_pipeline_en_5.5.0_3.0_1725966195814.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("hate_hate_balance_random0_seed1_twitter_roberta_base_dec2020_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("hate_hate_balance_random0_seed1_twitter_roberta_base_dec2020_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
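
Note that `df` is not created by the snippet above; it is assumed to be a Spark DataFrame with a single string column named `text`, the column this pretrained pipeline's `DocumentAssembler` reads. A minimal sketch:

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("hate_hate_balance_random0_seed1_twitter_roberta_base_dec2020_pipeline", lang="en")

# Build the DataFrame that transform() expects.
df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
annotations = pipeline.transform(df)

# For quick checks on a single string, annotate() returns a plain Python dict instead of a DataFrame.
print(pipeline.annotate("I love spark-nlp"))
```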
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|hate_hate_balance_random0_seed1_twitter_roberta_base_dec2020_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.3 MB| + +## References + +https://huggingface.co/tweettemposhift/hate-hate_balance_random0_seed1-twitter-roberta-base-dec2020 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-headline_similarities_en.md b/docs/_posts/ahmedlone127/2024-09-10-headline_similarities_en.md new file mode 100644 index 00000000000000..0d2741947c313f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-headline_similarities_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English headline_similarities MPNetEmbeddings from valurank +author: John Snow Labs +name: headline_similarities +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`headline_similarities` is a English model originally trained by valurank. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/headline_similarities_en_5.5.0_3.0_1725963621797.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/headline_similarities_en_5.5.0_3.0_1725963621797.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("headline_similarities","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("headline_similarities","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
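
Downstream Spark ML stages usually need plain vectors rather than annotation structs. One possible way to unpack the `embeddings` column, assuming the `pipelineDF` DataFrame from the Python example above:

```python
from sparknlp.base import EmbeddingsFinisher

# Convert Spark NLP annotations into Spark ML vectors.
finisher = EmbeddingsFinisher() \
    .setInputCols(["embeddings"]) \
    .setOutputCols(["sentence_vector"]) \
    .setOutputAsVector(True)

finished = finisher.transform(pipelineDF)
finished.selectExpr("explode(sentence_vector) as vector").show(truncate=False)
```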
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|headline_similarities| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.7 MB| + +## References + +https://huggingface.co/valurank/headline_similarities \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-histbert_finetuned_ner_en.md b/docs/_posts/ahmedlone127/2024-09-10-histbert_finetuned_ner_en.md new file mode 100644 index 00000000000000..6e56282268fd3d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-histbert_finetuned_ner_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English histbert_finetuned_ner BertForTokenClassification from crina-t +author: John Snow Labs +name: histbert_finetuned_ner +date: 2024-09-10 +tags: [en, open_source, onnx, token_classification, bert, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`histbert_finetuned_ner` is a English model originally trained by crina-t. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/histbert_finetuned_ner_en_5.5.0_3.0_1725955667825.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/histbert_finetuned_ner_en_5.5.0_3.0_1725955667825.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = BertForTokenClassification.pretrained("histbert_finetuned_ner","en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = BertForTokenClassification.pretrained("histbert_finetuned_ner", "en")
    .setInputCols(Array("document", "token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
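
Token-level tags are easier to consume once they are merged into entity chunks. A sketch using `NerConverter`, assuming the `pipelineDF` DataFrame from the Python example above and IOB-style tags in the `ner` column:

```python
from sparknlp.annotator import NerConverter

converter = NerConverter() \
    .setInputCols(["document", "token", "ner"]) \
    .setOutputCol("entities")

withEntities = converter.transform(pipelineDF)
withEntities.selectExpr("explode(entities) as entity") \
    .selectExpr("entity.result as chunk", "entity.metadata['entity'] as label") \
    .show(truncate=False)
```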
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|histbert_finetuned_ner| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|504.9 MB| + +## References + +https://huggingface.co/crina-t/histbert-finetuned-ner \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-icd_10_code_prediction_en.md b/docs/_posts/ahmedlone127/2024-09-10-icd_10_code_prediction_en.md new file mode 100644 index 00000000000000..647cad1d292193 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-icd_10_code_prediction_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English icd_10_code_prediction BertForSequenceClassification from AkshatSurolia +author: John Snow Labs +name: icd_10_code_prediction +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`icd_10_code_prediction` is a English model originally trained by AkshatSurolia. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/icd_10_code_prediction_en_5.5.0_3.0_1725957892360.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/icd_10_code_prediction_en_5.5.0_3.0_1725957892360.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = BertForSequenceClassification.pretrained("icd_10_code_prediction","en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = BertForSequenceClassification.pretrained("icd_10_code_prediction", "en")
    .setInputCols(Array("document", "token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|icd_10_code_prediction| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|451.5 MB| + +## References + +https://huggingface.co/AkshatSurolia/ICD-10-Code-Prediction \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-italian_legal_bert_sardinian_it.md b/docs/_posts/ahmedlone127/2024-09-10-italian_legal_bert_sardinian_it.md new file mode 100644 index 00000000000000..d1177143097814 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-italian_legal_bert_sardinian_it.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Italian italian_legal_bert_sardinian CamemBertEmbeddings from dlicari +author: John Snow Labs +name: italian_legal_bert_sardinian +date: 2024-09-10 +tags: [it, open_source, onnx, embeddings, camembert] +task: Embeddings +language: it +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: CamemBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained CamemBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`italian_legal_bert_sardinian` is a Italian model originally trained by dlicari. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/italian_legal_bert_sardinian_it_5.5.0_3.0_1725939541898.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/italian_legal_bert_sardinian_it_5.5.0_3.0_1725939541898.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = CamemBertEmbeddings.pretrained("italian_legal_bert_sardinian","it") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = CamemBertEmbeddings.pretrained("italian_legal_bert_sardinian","it") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|italian_legal_bert_sardinian| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[camembert]| +|Language:|it| +|Size:|412.4 MB| + +## References + +https://huggingface.co/dlicari/Italian-Legal-BERT-SC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-jai_shri_ram_finetuned_squad_en.md b/docs/_posts/ahmedlone127/2024-09-10-jai_shri_ram_finetuned_squad_en.md new file mode 100644 index 00000000000000..3a440e3d95b366 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-jai_shri_ram_finetuned_squad_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English jai_shri_ram_finetuned_squad DistilBertForQuestionAnswering from Rahul1703 +author: John Snow Labs +name: jai_shri_ram_finetuned_squad +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`jai_shri_ram_finetuned_squad` is a English model originally trained by Rahul1703. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/jai_shri_ram_finetuned_squad_en_5.5.0_3.0_1725960267423.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/jai_shri_ram_finetuned_squad_en_5.5.0_3.0_1725960267423.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = DistilBertForQuestionAnswering.pretrained("jai_shri_ram_finetuned_squad","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = DistilBertForQuestionAnswering.pretrained("jai_shri_ram_finetuned_squad", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
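
The extracted answer span is returned in the `answer` annotation column. A minimal way to read it back out, assuming the `pipelineDF` DataFrame from the Python example above:

```python
# `result` holds the predicted answer text for each question/context pair.
pipelineDF.selectExpr(
    "document_question.result as question",
    "answer.result as answer"
).show(truncate=False)
```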
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|jai_shri_ram_finetuned_squad| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Rahul1703/JAI_SHRI_RAM-finetuned-squad \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-legal_bert_small_cuad_en.md b/docs/_posts/ahmedlone127/2024-09-10-legal_bert_small_cuad_en.md new file mode 100644 index 00000000000000..a527009faa16a7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-legal_bert_small_cuad_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English legal_bert_small_cuad BertForQuestionAnswering from alex-apostolo +author: John Snow Labs +name: legal_bert_small_cuad +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, bert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`legal_bert_small_cuad` is a English model originally trained by alex-apostolo. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/legal_bert_small_cuad_en_5.5.0_3.0_1725926656191.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/legal_bert_small_cuad_en_5.5.0_3.0_1725926656191.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = BertForQuestionAnswering.pretrained("legal_bert_small_cuad","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = BertForQuestionAnswering.pretrained("legal_bert_small_cuad", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|legal_bert_small_cuad| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|130.6 MB| + +## References + +https://huggingface.co/alex-apostolo/legal-bert-small-cuad \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-lettuce_sayula_popoluca_dutch_xlm_en.md b/docs/_posts/ahmedlone127/2024-09-10-lettuce_sayula_popoluca_dutch_xlm_en.md new file mode 100644 index 00000000000000..363f21ae6892fa --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-lettuce_sayula_popoluca_dutch_xlm_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English lettuce_sayula_popoluca_dutch_xlm XlmRoBertaForTokenClassification from pranaydeeps +author: John Snow Labs +name: lettuce_sayula_popoluca_dutch_xlm +date: 2024-09-10 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`lettuce_sayula_popoluca_dutch_xlm` is a English model originally trained by pranaydeeps. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/lettuce_sayula_popoluca_dutch_xlm_en_5.5.0_3.0_1725928016954.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/lettuce_sayula_popoluca_dutch_xlm_en_5.5.0_3.0_1725928016954.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

tokenClassifier = XlmRoBertaForTokenClassification.pretrained("lettuce_sayula_popoluca_dutch_xlm","en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("ner")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("lettuce_sayula_popoluca_dutch_xlm", "en")
    .setInputCols(Array("document", "token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|lettuce_sayula_popoluca_dutch_xlm| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|864.5 MB| + +## References + +https://huggingface.co/pranaydeeps/lettuce_pos_nl_xlm \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-lettuce_sayula_popoluca_dutch_xlm_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-lettuce_sayula_popoluca_dutch_xlm_pipeline_en.md new file mode 100644 index 00000000000000..e17ed51b378fb7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-lettuce_sayula_popoluca_dutch_xlm_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English lettuce_sayula_popoluca_dutch_xlm_pipeline pipeline XlmRoBertaForTokenClassification from pranaydeeps +author: John Snow Labs +name: lettuce_sayula_popoluca_dutch_xlm_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`lettuce_sayula_popoluca_dutch_xlm_pipeline` is a English model originally trained by pranaydeeps. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/lettuce_sayula_popoluca_dutch_xlm_pipeline_en_5.5.0_3.0_1725928079073.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/lettuce_sayula_popoluca_dutch_xlm_pipeline_en_5.5.0_3.0_1725928079073.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("lettuce_sayula_popoluca_dutch_xlm_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("lettuce_sayula_popoluca_dutch_xlm_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|lettuce_sayula_popoluca_dutch_xlm_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|864.5 MB| + +## References + +https://huggingface.co/pranaydeeps/lettuce_pos_nl_xlm + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-lm6_movie_aspect_extraction_bert_en.md b/docs/_posts/ahmedlone127/2024-09-10-lm6_movie_aspect_extraction_bert_en.md new file mode 100644 index 00000000000000..a1aeb7cf61e6ab --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-lm6_movie_aspect_extraction_bert_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English lm6_movie_aspect_extraction_bert BertForSequenceClassification from Lowerated +author: John Snow Labs +name: lm6_movie_aspect_extraction_bert +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, bert] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`lm6_movie_aspect_extraction_bert` is a English model originally trained by Lowerated. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/lm6_movie_aspect_extraction_bert_en_5.5.0_3.0_1725957853549.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/lm6_movie_aspect_extraction_bert_en_5.5.0_3.0_1725957853549.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = BertForSequenceClassification.pretrained("lm6_movie_aspect_extraction_bert","en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = BertForSequenceClassification.pretrained("lm6_movie_aspect_extraction_bert", "en")
    .setInputCols(Array("document", "token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|lm6_movie_aspect_extraction_bert| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|409.4 MB| + +## References + +https://huggingface.co/Lowerated/lm6-movie-aspect-extraction-bert \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-masked_language_model_nikhilwani_en.md b/docs/_posts/ahmedlone127/2024-09-10-masked_language_model_nikhilwani_en.md new file mode 100644 index 00000000000000..f350ff92d2f47c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-masked_language_model_nikhilwani_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English masked_language_model_nikhilwani RoBertaEmbeddings from nikhilwani +author: John Snow Labs +name: masked_language_model_nikhilwani +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`masked_language_model_nikhilwani` is a English model originally trained by nikhilwani. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/masked_language_model_nikhilwani_en_5.5.0_3.0_1725937793934.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/masked_language_model_nikhilwani_en_5.5.0_3.0_1725937793934.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("masked_language_model_nikhilwani","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("masked_language_model_nikhilwani","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|masked_language_model_nikhilwani| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|306.5 MB| + +## References + +https://huggingface.co/nikhilwani/masked-language-model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-mbert_hindi_bengali_mlm_squad_tydi_mlqa_pipeline_hi.md b/docs/_posts/ahmedlone127/2024-09-10-mbert_hindi_bengali_mlm_squad_tydi_mlqa_pipeline_hi.md new file mode 100644 index 00000000000000..2321156a291a27 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-mbert_hindi_bengali_mlm_squad_tydi_mlqa_pipeline_hi.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Hindi mbert_hindi_bengali_mlm_squad_tydi_mlqa_pipeline pipeline BertForQuestionAnswering from hapandya +author: John Snow Labs +name: mbert_hindi_bengali_mlm_squad_tydi_mlqa_pipeline +date: 2024-09-10 +tags: [hi, open_source, pipeline, onnx] +task: Question Answering +language: hi +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mbert_hindi_bengali_mlm_squad_tydi_mlqa_pipeline` is a Hindi model originally trained by hapandya. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mbert_hindi_bengali_mlm_squad_tydi_mlqa_pipeline_hi_5.5.0_3.0_1725927413988.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mbert_hindi_bengali_mlm_squad_tydi_mlqa_pipeline_hi_5.5.0_3.0_1725927413988.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("mbert_hindi_bengali_mlm_squad_tydi_mlqa_pipeline", lang = "hi") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("mbert_hindi_bengali_mlm_squad_tydi_mlqa_pipeline", lang = "hi") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mbert_hindi_bengali_mlm_squad_tydi_mlqa_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|hi| +|Size:|663.7 MB| + +## References + +https://huggingface.co/hapandya/mBERT-hi-bn-MLM-SQuAD-TyDi-MLQA + +## Included Models + +- MultiDocumentAssembler +- BertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-mbert_quran_qa_en.md b/docs/_posts/ahmedlone127/2024-09-10-mbert_quran_qa_en.md new file mode 100644 index 00000000000000..acba1ccea0b7c4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-mbert_quran_qa_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English mbert_quran_qa BertForQuestionAnswering from NeginShams +author: John Snow Labs +name: mbert_quran_qa +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, bert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mbert_quran_qa` is a English model originally trained by NeginShams. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mbert_quran_qa_en_5.5.0_3.0_1725926890035.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mbert_quran_qa_en_5.5.0_3.0_1725926890035.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

spanClassifier = BertForQuestionAnswering.pretrained("mbert_quran_qa","en") \
    .setInputCols(["document_question","document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = BertForQuestionAnswering.pretrained("mbert_quran_qa", "en")
    .setInputCols(Array("document_question","document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mbert_quran_qa| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|625.6 MB| + +## References + +https://huggingface.co/NeginShams/mbert-Quran_QA \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-mbert_quran_qa_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-mbert_quran_qa_pipeline_en.md new file mode 100644 index 00000000000000..8180408023fb30 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-mbert_quran_qa_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English mbert_quran_qa_pipeline pipeline BertForQuestionAnswering from NeginShams +author: John Snow Labs +name: mbert_quran_qa_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mbert_quran_qa_pipeline` is a English model originally trained by NeginShams. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mbert_quran_qa_pipeline_en_5.5.0_3.0_1725926919387.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mbert_quran_qa_pipeline_en_5.5.0_3.0_1725926919387.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("mbert_quran_qa_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("mbert_quran_qa_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mbert_quran_qa_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|625.6 MB| + +## References + +https://huggingface.co/NeginShams/mbert-Quran_QA + +## Included Models + +- MultiDocumentAssembler +- BertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-medical_small_english_1_1v_en.md b/docs/_posts/ahmedlone127/2024-09-10-medical_small_english_1_1v_en.md new file mode 100644 index 00000000000000..81fb24a3d43cd0 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-medical_small_english_1_1v_en.md @@ -0,0 +1,84 @@ +--- +layout: model +title: English medical_small_english_1_1v WhisperForCTC from Dev372 +author: John Snow Labs +name: medical_small_english_1_1v +date: 2024-09-10 +tags: [en, open_source, onnx, asr, whisper] +task: Automatic Speech Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: WhisperForCTC +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`medical_small_english_1_1v` is a English model originally trained by Dev372. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/medical_small_english_1_1v_en_5.5.0_3.0_1725941999531.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/medical_small_english_1_1v_en_5.5.0_3.0_1725941999531.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

audioAssembler = AudioAssembler() \
    .setInputCol("audio_content") \
    .setOutputCol("audio_assembler")

speechToText = WhisperForCTC.pretrained("medical_small_english_1_1v","en") \
    .setInputCols(["audio_assembler"]) \
    .setOutputCol("text")

pipeline = Pipeline().setStages([audioAssembler, speechToText])

# `data` is assumed to be a DataFrame with an "audio_content" column of floating-point audio samples.
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)

```
```scala

val audioAssembler = new AudioAssembler()
    .setInputCol("audio_content")
    .setOutputCol("audio_assembler")

val speechToText = WhisperForCTC.pretrained("medical_small_english_1_1v", "en")
    .setInputCols(Array("audio_assembler"))
    .setOutputCol("text")

val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))

// `data` is assumed to be a DataFrame with an "audio_content" column of floating-point audio samples.
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)

```
</div>
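
The snippets above assume an existing DataFrame `data` with an `audio_content` column of floating-point audio samples. One possible way to build it, using `librosa` purely as an illustrative assumption (any decoder that yields a float array works):

```python
import librosa

# Whisper checkpoints expect 16 kHz mono audio; "sample.wav" is a placeholder path.
audio, _ = librosa.load("sample.wav", sr=16000)
data = spark.createDataFrame([[audio.tolist()]], ["audio_content"])
```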
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|medical_small_english_1_1v| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[audio_assembler]| +|Output Labels:|[text]| +|Language:|en| +|Size:|1.7 GB| + +## References + +https://huggingface.co/Dev372/Medical_small_en_1_1v \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-medical_small_english_1_1v_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-medical_small_english_1_1v_pipeline_en.md new file mode 100644 index 00000000000000..75cc23c140d28b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-medical_small_english_1_1v_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English medical_small_english_1_1v_pipeline pipeline WhisperForCTC from Dev372 +author: John Snow Labs +name: medical_small_english_1_1v_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`medical_small_english_1_1v_pipeline` is a English model originally trained by Dev372. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/medical_small_english_1_1v_pipeline_en_5.5.0_3.0_1725942082007.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/medical_small_english_1_1v_pipeline_en_5.5.0_3.0_1725942082007.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("medical_small_english_1_1v_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("medical_small_english_1_1v_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|medical_small_english_1_1v_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.7 GB| + +## References + +https://huggingface.co/Dev372/Medical_small_en_1_1v + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-mini_phobert_v2_2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-mini_phobert_v2_2_pipeline_en.md new file mode 100644 index 00000000000000..2486b84b333813 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-mini_phobert_v2_2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English mini_phobert_v2_2_pipeline pipeline RoBertaEmbeddings from keepitreal +author: John Snow Labs +name: mini_phobert_v2_2_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mini_phobert_v2_2_pipeline` is a English model originally trained by keepitreal. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mini_phobert_v2_2_pipeline_en_5.5.0_3.0_1725937250439.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mini_phobert_v2_2_pipeline_en_5.5.0_3.0_1725937250439.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("mini_phobert_v2_2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("mini_phobert_v2_2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mini_phobert_v2_2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|77.1 MB| + +## References + +https://huggingface.co/keepitreal/mini-phobert-v2.2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-mpnet_base_nli_v2_en.md b/docs/_posts/ahmedlone127/2024-09-10-mpnet_base_nli_v2_en.md new file mode 100644 index 00000000000000..a172427c5fdf88 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-mpnet_base_nli_v2_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English mpnet_base_nli_v2 MPNetEmbeddings from manuel-couto-pintos +author: John Snow Labs +name: mpnet_base_nli_v2 +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mpnet_base_nli_v2` is a English model originally trained by manuel-couto-pintos. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mpnet_base_nli_v2_en_5.5.0_3.0_1725936224276.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mpnet_base_nli_v2_en_5.5.0_3.0_1725936224276.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("mpnet_base_nli_v2","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("mpnet_base_nli_v2","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
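
Sentence embeddings from this model are typically compared with cosine similarity. A small sketch, reusing the `pipeline` object defined in the Python example above:

```python
import numpy as np

pair = spark.createDataFrame([["A man is playing guitar"], ["Someone plays an instrument"]]).toDF("text")
rows = pipeline.fit(pair).transform(pair).select("embeddings").collect()
a, b = (np.array(r.embeddings[0].embeddings) for r in rows)

# Cosine similarity between the two sentence vectors.
print(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
```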
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mpnet_base_nli_v2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|379.3 MB| + +## References + +https://huggingface.co/manuel-couto-pintos/mpnet-base-nli-v2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5_en.md b/docs/_posts/ahmedlone127/2024-09-10-mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5_en.md new file mode 100644 index 00000000000000..bb28727209a86c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5 MPNetEmbeddings from luiz-and-robert-thesis +author: John Snow Labs +name: mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5 +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5` is a English model originally trained by luiz-and-robert-thesis. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5_en_5.5.0_3.0_1725964126237.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5_en_5.5.0_3.0_1725964126237.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|406.7 MB| + +## References + +https://huggingface.co/luiz-and-robert-thesis/mpnet-frozen-newtriplets-lr-2e-7-m-1-e-5 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5_pipeline_en.md new file mode 100644 index 00000000000000..170527e88a271f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5_pipeline pipeline MPNetEmbeddings from luiz-and-robert-thesis +author: John Snow Labs +name: mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5_pipeline` is a English model originally trained by luiz-and-robert-thesis. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5_pipeline_en_5.5.0_3.0_1725964147619.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5_pipeline_en_5.5.0_3.0_1725964147619.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|mpnet_frozen_newtriplets_lr_2e_7_m_1_e_5_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.7 MB| + +## References + +https://huggingface.co/luiz-and-robert-thesis/mpnet-frozen-newtriplets-lr-2e-7-m-1-e-5 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-msmarco_distilbert_word2vec256k_mlm_230k_en.md b/docs/_posts/ahmedlone127/2024-09-10-msmarco_distilbert_word2vec256k_mlm_230k_en.md new file mode 100644 index 00000000000000..e3ec43f78dfc35 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-msmarco_distilbert_word2vec256k_mlm_230k_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English msmarco_distilbert_word2vec256k_mlm_230k DistilBertEmbeddings from vocab-transformers +author: John Snow Labs +name: msmarco_distilbert_word2vec256k_mlm_230k +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, distilbert] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`msmarco_distilbert_word2vec256k_mlm_230k` is a English model originally trained by vocab-transformers. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/msmarco_distilbert_word2vec256k_mlm_230k_en_5.5.0_3.0_1725946814967.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/msmarco_distilbert_word2vec256k_mlm_230k_en_5.5.0_3.0_1725946814967.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = DistilBertEmbeddings.pretrained("msmarco_distilbert_word2vec256k_mlm_230k","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = DistilBertEmbeddings.pretrained("msmarco_distilbert_word2vec256k_mlm_230k","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
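
The `pretrained()` call downloads the model the first time it runs, so it can be worth persisting the fitted `PipelineModel` and reloading it in later jobs. A sketch with an illustrative output path, assuming the objects from the Python example above:

```python
from pyspark.ml import PipelineModel

pipelineModel.write().overwrite().save("/tmp/msmarco_word2vec256k_mlm_230k_model")

restored = PipelineModel.load("/tmp/msmarco_word2vec256k_mlm_230k_model")
restored.transform(data).selectExpr("explode(embeddings) as emb") \
    .selectExpr("emb.result as token", "size(emb.embeddings) as dims") \
    .show(truncate=False)
```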
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|msmarco_distilbert_word2vec256k_mlm_230k| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[distilbert]| +|Language:|en| +|Size:|885.6 MB| + +## References + +https://huggingface.co/vocab-transformers/msmarco-distilbert-word2vec256k-MLM_230k \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-msmarco_distilbert_word2vec256k_mlm_230k_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-msmarco_distilbert_word2vec256k_mlm_230k_pipeline_en.md new file mode 100644 index 00000000000000..16f963977f0da1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-msmarco_distilbert_word2vec256k_mlm_230k_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English msmarco_distilbert_word2vec256k_mlm_230k_pipeline pipeline DistilBertEmbeddings from vocab-transformers +author: John Snow Labs +name: msmarco_distilbert_word2vec256k_mlm_230k_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`msmarco_distilbert_word2vec256k_mlm_230k_pipeline` is a English model originally trained by vocab-transformers. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/msmarco_distilbert_word2vec256k_mlm_230k_pipeline_en_5.5.0_3.0_1725946854491.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/msmarco_distilbert_word2vec256k_mlm_230k_pipeline_en_5.5.0_3.0_1725946854491.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("msmarco_distilbert_word2vec256k_mlm_230k_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("msmarco_distilbert_word2vec256k_mlm_230k_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|msmarco_distilbert_word2vec256k_mlm_230k_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|885.6 MB| + +## References + +https://huggingface.co/vocab-transformers/msmarco-distilbert-word2vec256k-MLM_230k + +## Included Models + +- DocumentAssembler +- TokenizerModel +- DistilBertEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-multi_sbert_v2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-multi_sbert_v2_pipeline_en.md new file mode 100644 index 00000000000000..4ee5c47e4bee4c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-multi_sbert_v2_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English multi_sbert_v2_pipeline pipeline MPNetEmbeddings from Gnartiel +author: John Snow Labs +name: multi_sbert_v2_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`multi_sbert_v2_pipeline` is a English model originally trained by Gnartiel. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/multi_sbert_v2_pipeline_en_5.5.0_3.0_1725964021012.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/multi_sbert_v2_pipeline_en_5.5.0_3.0_1725964021012.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("multi_sbert_v2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("multi_sbert_v2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|multi_sbert_v2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.8 MB| + +## References + +https://huggingface.co/Gnartiel/multi-sbert-v2 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-nace2_level1_29_en.md b/docs/_posts/ahmedlone127/2024-09-10-nace2_level1_29_en.md new file mode 100644 index 00000000000000..03937ea7b9d0d3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-nace2_level1_29_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English nace2_level1_29 RoBertaForSequenceClassification from intelcomp +author: John Snow Labs +name: nace2_level1_29 +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`nace2_level1_29` is a English model originally trained by intelcomp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/nace2_level1_29_en_5.5.0_3.0_1725965707548.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/nace2_level1_29_en_5.5.0_3.0_1725965707548.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("nace2_level1_29","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("nace2_level1_29", "en")
+  .setInputCols(Array("document","token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
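As a follow-up to the example above, the predicted label can be read from the `result` field of the `class` output column; a short sketch, assuming the pipeline was fitted as shown:

```python
from pyspark.sql import functions as F

# Each row of "class" holds one annotation whose `result` is the predicted label
pipelineDF.select(
    F.col("text"),
    F.col("class.result").alias("prediction")
).show(truncate=False)
```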
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|nace2_level1_29| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/intelcomp/nace2_level1_29 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-nace2_level1_29_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-nace2_level1_29_pipeline_en.md new file mode 100644 index 00000000000000..88c59227066e76 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-nace2_level1_29_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English nace2_level1_29_pipeline pipeline RoBertaForSequenceClassification from intelcomp +author: John Snow Labs +name: nace2_level1_29_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`nace2_level1_29_pipeline` is a English model originally trained by intelcomp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/nace2_level1_29_pipeline_en_5.5.0_3.0_1725965780569.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/nace2_level1_29_pipeline_en_5.5.0_3.0_1725965780569.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("nace2_level1_29_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("nace2_level1_29_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
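For quick experiments the pretrained pipeline can also be run on a plain string via `annotate()`. A brief sketch; the sample sentence is only an illustrative NACE-style activity description, and the `class` key assumes the classifier writes to a `class` column as in the single-model card:

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("nace2_level1_29_pipeline", lang="en")

# annotate() returns a dict keyed by the pipeline's output columns
result = pipeline.annotate("Construction of residential and non-residential buildings")
print(result.get("class"))
```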
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|nace2_level1_29_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/intelcomp/nace2_level1_29 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-nietzschellm_en.md b/docs/_posts/ahmedlone127/2024-09-10-nietzschellm_en.md new file mode 100644 index 00000000000000..c341b2ce9b395d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-nietzschellm_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English nietzschellm RoBertaEmbeddings from ferrarimarlon +author: John Snow Labs +name: nietzschellm +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`nietzschellm` is a English model originally trained by ferrarimarlon. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/nietzschellm_en_5.5.0_3.0_1725930499630.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/nietzschellm_en_5.5.0_3.0_1725930499630.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("nietzschellm","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("nietzschellm","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|nietzschellm| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|310.3 MB| + +## References + +https://huggingface.co/ferrarimarlon/nietzscheLLM \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-nlp_team_binarytoxicityclassifierforevaluationpurpose_en.md b/docs/_posts/ahmedlone127/2024-09-10-nlp_team_binarytoxicityclassifierforevaluationpurpose_en.md new file mode 100644 index 00000000000000..9e490f7d5670b9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-nlp_team_binarytoxicityclassifierforevaluationpurpose_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English nlp_team_binarytoxicityclassifierforevaluationpurpose RoBertaForSequenceClassification from naman632 +author: John Snow Labs +name: nlp_team_binarytoxicityclassifierforevaluationpurpose +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`nlp_team_binarytoxicityclassifierforevaluationpurpose` is a English model originally trained by naman632. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/nlp_team_binarytoxicityclassifierforevaluationpurpose_en_5.5.0_3.0_1725965240357.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/nlp_team_binarytoxicityclassifierforevaluationpurpose_en_5.5.0_3.0_1725965240357.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("nlp_team_binarytoxicityclassifierforevaluationpurpose","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("nlp_team_binarytoxicityclassifierforevaluationpurpose", "en")
+  .setInputCols(Array("document","token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|nlp_team_binarytoxicityclassifierforevaluationpurpose| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|467.9 MB| + +## References + +https://huggingface.co/naman632/NLP_team_binaryToxicityClassifierForEvaluationPurpose \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-output_mask_step_pretraining_plus_contr_roberta_large_epochs_1_en.md b/docs/_posts/ahmedlone127/2024-09-10-output_mask_step_pretraining_plus_contr_roberta_large_epochs_1_en.md new file mode 100644 index 00000000000000..fbbeb79932127f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-output_mask_step_pretraining_plus_contr_roberta_large_epochs_1_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English output_mask_step_pretraining_plus_contr_roberta_large_epochs_1 RoBertaForQuestionAnswering from AnonymousSub +author: John Snow Labs +name: output_mask_step_pretraining_plus_contr_roberta_large_epochs_1 +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`output_mask_step_pretraining_plus_contr_roberta_large_epochs_1` is a English model originally trained by AnonymousSub. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/output_mask_step_pretraining_plus_contr_roberta_large_epochs_1_en_5.5.0_3.0_1725959264059.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/output_mask_step_pretraining_plus_contr_roberta_large_epochs_1_en_5.5.0_3.0_1725959264059.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = RoBertaForQuestionAnswering.pretrained("output_mask_step_pretraining_plus_contr_roberta_large_epochs_1","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = RoBertaForQuestionAnswering.pretrained("output_mask_step_pretraining_plus_contr_roberta_large_epochs_1", "en")
+  .setInputCols(Array("document_question","document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
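Building on the Python example above, the extracted span is returned in the `answer` annotation column; a short sketch for reading it back out:

```python
from pyspark.sql import functions as F

# `result` holds the answer text selected from the context for each question
pipelineDF.select(
    F.col("question"),
    F.col("answer.result").alias("answer")
).show(truncate=False)
```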
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|output_mask_step_pretraining_plus_contr_roberta_large_epochs_1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/AnonymousSub/output_mask_step_pretraining_plus_contr_roberta-large_EPOCHS_1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-output_mask_step_pretraining_plus_contr_roberta_large_epochs_1_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-output_mask_step_pretraining_plus_contr_roberta_large_epochs_1_pipeline_en.md new file mode 100644 index 00000000000000..ff20bb0a1dd9bc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-output_mask_step_pretraining_plus_contr_roberta_large_epochs_1_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English output_mask_step_pretraining_plus_contr_roberta_large_epochs_1_pipeline pipeline RoBertaForQuestionAnswering from AnonymousSub +author: John Snow Labs +name: output_mask_step_pretraining_plus_contr_roberta_large_epochs_1_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`output_mask_step_pretraining_plus_contr_roberta_large_epochs_1_pipeline` is a English model originally trained by AnonymousSub. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/output_mask_step_pretraining_plus_contr_roberta_large_epochs_1_pipeline_en_5.5.0_3.0_1725959338529.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/output_mask_step_pretraining_plus_contr_roberta_large_epochs_1_pipeline_en_5.5.0_3.0_1725959338529.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("output_mask_step_pretraining_plus_contr_roberta_large_epochs_1_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("output_mask_step_pretraining_plus_contr_roberta_large_epochs_1_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|output_mask_step_pretraining_plus_contr_roberta_large_epochs_1_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/AnonymousSub/output_mask_step_pretraining_plus_contr_roberta-large_EPOCHS_1 + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-parameter_psb_en.md b/docs/_posts/ahmedlone127/2024-09-10-parameter_psb_en.md new file mode 100644 index 00000000000000..7cbb8fc89e0d9d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-parameter_psb_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English parameter_psb MPNetEmbeddings from nategro +author: John Snow Labs +name: parameter_psb +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`parameter_psb` is a English model originally trained by nategro. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/parameter_psb_en_5.5.0_3.0_1725963422452.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/parameter_psb_en_5.5.0_3.0_1725963422452.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("parameter_psb","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("parameter_psb","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
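Since MPNet models of this kind produce one sentence-level vector per input document, a small follow-up sketch (assuming the `pipelineDF` from the Python example above) shows how to get the raw vectors for, e.g., a similarity search:

```python
from pyspark.sql import functions as F

# One annotation per input text; its `embeddings` field is the sentence vector
sentence_vectors = pipelineDF.select(
    F.col("text"),
    F.explode("embeddings").alias("emb")
).select("text", F.col("emb.embeddings").alias("sentence_embedding"))

sentence_vectors.show(truncate=60)
```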
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|parameter_psb| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/nategro/parameter-psb \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-platzi_distilroberta_base_mrpc_glue_santirest_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-platzi_distilroberta_base_mrpc_glue_santirest_pipeline_en.md new file mode 100644 index 00000000000000..55a62a6798abb4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-platzi_distilroberta_base_mrpc_glue_santirest_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English platzi_distilroberta_base_mrpc_glue_santirest_pipeline pipeline RoBertaForSequenceClassification from platzi +author: John Snow Labs +name: platzi_distilroberta_base_mrpc_glue_santirest_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`platzi_distilroberta_base_mrpc_glue_santirest_pipeline` is a English model originally trained by platzi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/platzi_distilroberta_base_mrpc_glue_santirest_pipeline_en_5.5.0_3.0_1725965960707.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/platzi_distilroberta_base_mrpc_glue_santirest_pipeline_en_5.5.0_3.0_1725965960707.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("platzi_distilroberta_base_mrpc_glue_santirest_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("platzi_distilroberta_base_mrpc_glue_santirest_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|platzi_distilroberta_base_mrpc_glue_santirest_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|308.6 MB| + +## References + +https://huggingface.co/platzi/platzi-distilroberta-base-mrpc-glue-santirest + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-q2d_ep3_1234_en.md b/docs/_posts/ahmedlone127/2024-09-10-q2d_ep3_1234_en.md new file mode 100644 index 00000000000000..faac77d02cb845 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-q2d_ep3_1234_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English q2d_ep3_1234 MPNetEmbeddings from ingeol +author: John Snow Labs +name: q2d_ep3_1234 +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, mpnet] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: MPNetEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`q2d_ep3_1234` is a English model originally trained by ingeol. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/q2d_ep3_1234_en_5.5.0_3.0_1725964065438.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/q2d_ep3_1234_en_5.5.0_3.0_1725964065438.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +embeddings = MPNetEmbeddings.pretrained("q2d_ep3_1234","en") \ + .setInputCols(["document"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val embeddings = MPNetEmbeddings.pretrained("q2d_ep3_1234","en") + .setInputCols(Array("document")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|q2d_ep3_1234| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document]| +|Output Labels:|[mpnet]| +|Language:|en| +|Size:|407.1 MB| + +## References + +https://huggingface.co/ingeol/q2d_ep3_1234 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-qa_with_squad_en.md b/docs/_posts/ahmedlone127/2024-09-10-qa_with_squad_en.md new file mode 100644 index 00000000000000..84bc8d6f8c6b12 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-qa_with_squad_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English qa_with_squad DistilBertForQuestionAnswering from lazaroq11 +author: John Snow Labs +name: qa_with_squad +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`qa_with_squad` is a English model originally trained by lazaroq11. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/qa_with_squad_en_5.5.0_3.0_1725932124786.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/qa_with_squad_en_5.5.0_3.0_1725932124786.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("qa_with_squad","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("qa_with_squad", "en")
+  .setInputCols(Array("document_question","document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|qa_with_squad| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/lazaroq11/qa_with_squad \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-regr_3_en.md b/docs/_posts/ahmedlone127/2024-09-10-regr_3_en.md new file mode 100644 index 00000000000000..9d1428dcd59928 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-regr_3_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English regr_3 RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: regr_3 +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`regr_3` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/regr_3_en_5.5.0_3.0_1725965487603.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/regr_3_en_5.5.0_3.0_1725965487603.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("regr_3","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("regr_3", "en")
+  .setInputCols(Array("document","token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|regr_3| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/Regr_3 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-results_teng0929_en.md b/docs/_posts/ahmedlone127/2024-09-10-results_teng0929_en.md new file mode 100644 index 00000000000000..5cfd0aefe769d5 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-results_teng0929_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English results_teng0929 RoBertaForQuestionAnswering from teng0929 +author: John Snow Labs +name: results_teng0929 +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`results_teng0929` is a English model originally trained by teng0929. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/results_teng0929_en_5.5.0_3.0_1725959412042.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/results_teng0929_en_5.5.0_3.0_1725959412042.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = RoBertaForQuestionAnswering.pretrained("results_teng0929","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+  .setInputCols(Array("question", "context"))
+  .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = RoBertaForQuestionAnswering.pretrained("results_teng0929", "en")
+  .setInputCols(Array("document_question","document_context"))
+  .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|results_teng0929| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|465.1 MB| + +## References + +https://huggingface.co/teng0929/results \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-rmse_4_en.md b/docs/_posts/ahmedlone127/2024-09-10-rmse_4_en.md new file mode 100644 index 00000000000000..4523d7ed732b44 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-rmse_4_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English rmse_4 RoBertaForSequenceClassification from BaronSch +author: John Snow Labs +name: rmse_4 +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`rmse_4` is a English model originally trained by BaronSch. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/rmse_4_en_5.5.0_3.0_1725962042060.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/rmse_4_en_5.5.0_3.0_1725962042060.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("rmse_4","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+  .setInputCol("text")
+  .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+  .setInputCols(Array("document"))
+  .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("rmse_4", "en")
+  .setInputCols(Array("document","token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|rmse_4| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.5 MB| + +## References + +https://huggingface.co/BaronSch/RMSE_4 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_base_100m_2_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_100m_2_en.md new file mode 100644 index 00000000000000..441cbc57e06468 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_100m_2_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_base_100m_2 RoBertaEmbeddings from nyu-mll +author: John Snow Labs +name: roberta_base_100m_2 +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_100m_2` is a English model originally trained by nyu-mll. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_100m_2_en_5.5.0_3.0_1725931386979.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_100m_2_en_5.5.0_3.0_1725931386979.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("roberta_base_100m_2","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("roberta_base_100m_2","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
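If the word embeddings are meant to feed a downstream Spark ML stage, Spark NLP's `EmbeddingsFinisher` can convert the annotation structs into plain vectors. A sketch building on the `pipelineDF` from the Python example above (the output column name is illustrative):

```python
from sparknlp.base import EmbeddingsFinisher

# Convert Spark NLP annotations into Spark ML vectors
finisher = EmbeddingsFinisher() \
    .setInputCols(["embeddings"]) \
    .setOutputCols(["finished_embeddings"]) \
    .setOutputAsVector(True)

finished = finisher.transform(pipelineDF)
finished.select("finished_embeddings").show(truncate=60)
```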
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_100m_2| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|296.1 MB| + +## References + +https://huggingface.co/nyu-mll/roberta-base-100M-2 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_base_10m_2_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_10m_2_pipeline_en.md new file mode 100644 index 00000000000000..64fae8913039a8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_10m_2_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_base_10m_2_pipeline pipeline RoBertaEmbeddings from nyu-mll +author: John Snow Labs +name: roberta_base_10m_2_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_10m_2_pipeline` is a English model originally trained by nyu-mll. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_10m_2_pipeline_en_5.5.0_3.0_1725931136524.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_10m_2_pipeline_en_5.5.0_3.0_1725931136524.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_10m_2_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_10m_2_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_10m_2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|296.2 MB| + +## References + +https://huggingface.co/nyu-mll/roberta-base-10M-2 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_base_finetuned_squad_nlp_course_chapter7_section6_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_finetuned_squad_nlp_course_chapter7_section6_pipeline_en.md new file mode 100644 index 00000000000000..a5ccc07609194b --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_finetuned_squad_nlp_course_chapter7_section6_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English roberta_base_finetuned_squad_nlp_course_chapter7_section6_pipeline pipeline RoBertaForQuestionAnswering from BanUrsus +author: John Snow Labs +name: roberta_base_finetuned_squad_nlp_course_chapter7_section6_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_finetuned_squad_nlp_course_chapter7_section6_pipeline` is a English model originally trained by BanUrsus. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_squad_nlp_course_chapter7_section6_pipeline_en_5.5.0_3.0_1725958980432.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_squad_nlp_course_chapter7_section6_pipeline_en_5.5.0_3.0_1725958980432.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_finetuned_squad_nlp_course_chapter7_section6_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_finetuned_squad_nlp_course_chapter7_section6_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_finetuned_squad_nlp_course_chapter7_section6_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|461.9 MB| + +## References + +https://huggingface.co/BanUrsus/roberta-base-finetuned-squad_nlp-course-chapter7-section6 + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_base_finetuned_squad_v2_hcy5561_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_finetuned_squad_v2_hcy5561_pipeline_en.md new file mode 100644 index 00000000000000..4c44908c5e8165 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_finetuned_squad_v2_hcy5561_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English roberta_base_finetuned_squad_v2_hcy5561_pipeline pipeline RoBertaForQuestionAnswering from hcy5561 +author: John Snow Labs +name: roberta_base_finetuned_squad_v2_hcy5561_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_finetuned_squad_v2_hcy5561_pipeline` is a English model originally trained by hcy5561. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_squad_v2_hcy5561_pipeline_en_5.5.0_3.0_1725958789314.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_squad_v2_hcy5561_pipeline_en_5.5.0_3.0_1725958789314.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_finetuned_squad_v2_hcy5561_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_finetuned_squad_v2_hcy5561_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_finetuned_squad_v2_hcy5561_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|462.0 MB| + +## References + +https://huggingface.co/hcy5561/roberta-base-finetuned-squad_v2 + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_base_finetuned_wallisian_whisper_8ep_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_finetuned_wallisian_whisper_8ep_pipeline_en.md new file mode 100644 index 00000000000000..3abbcff8618f7f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_finetuned_wallisian_whisper_8ep_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_base_finetuned_wallisian_whisper_8ep_pipeline pipeline RoBertaEmbeddings from btamm12 +author: John Snow Labs +name: roberta_base_finetuned_wallisian_whisper_8ep_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_finetuned_wallisian_whisper_8ep_pipeline` is a English model originally trained by btamm12. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_wallisian_whisper_8ep_pipeline_en_5.5.0_3.0_1725937487398.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_finetuned_wallisian_whisper_8ep_pipeline_en_5.5.0_3.0_1725937487398.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_finetuned_wallisian_whisper_8ep_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_finetuned_wallisian_whisper_8ep_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_finetuned_wallisian_whisper_8ep_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|466.0 MB| + +## References + +https://huggingface.co/btamm12/roberta-base-finetuned-wls-whisper-8ep + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_base_imdb_trained_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_imdb_trained_en.md new file mode 100644 index 00000000000000..80aab4bbd77ab3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_imdb_trained_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_base_imdb_trained RoBertaForSequenceClassification from JakobKaiser +author: John Snow Labs +name: roberta_base_imdb_trained +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_imdb_trained` is a English model originally trained by JakobKaiser. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_imdb_trained_en_5.5.0_3.0_1725962177976.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_imdb_trained_en_5.5.0_3.0_1725962177976.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_imdb_trained","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_imdb_trained", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
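+
+Once the pipeline above has run, the predictions live in the `class` column as Spark NLP annotations. A minimal sketch of pulling out the predicted labels, assuming the Python pipeline above was executed as written:
+
+```python
+from pyspark.sql import functions as F
+
+# Each annotation's "result" field holds the predicted label string
+pipelineDF.select(F.explode(F.col("class.result")).alias("predicted_label")).show(truncate=False)
+```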
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_imdb_trained| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|463.0 MB| + +## References + +https://huggingface.co/JakobKaiser/roberta-base-imdb-trained \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_base_last_char_acl2023_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_last_char_acl2023_en.md new file mode 100644 index 00000000000000..b3916fd6d75c3e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_last_char_acl2023_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_base_last_char_acl2023 RoBertaEmbeddings from hitachi-nlp +author: John Snow Labs +name: roberta_base_last_char_acl2023 +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_last_char_acl2023` is a English model originally trained by hitachi-nlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_last_char_acl2023_en_5.5.0_3.0_1725931202016.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_last_char_acl2023_en_5.5.0_3.0_1725931202016.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("roberta_base_last_char_acl2023","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("roberta_base_last_char_acl2023","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
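+
+The `embeddings` column produced above contains one annotation per token, each carrying the token text and its vector. A minimal sketch of flattening them for inspection, assuming the Python pipeline above was executed as written:
+
+```python
+from pyspark.sql import functions as F
+
+# One row per token: the token string and its embedding vector
+pipelineDF.select(F.explode(F.col("embeddings")).alias("ann")) \
+    .select(F.col("ann.result").alias("token"), F.col("ann.embeddings").alias("vector")) \
+    .show(truncate=80)
+```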
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_last_char_acl2023| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|465.9 MB| + +## References + +https://huggingface.co/hitachi-nlp/roberta-base_last-char_acl2023 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_base_last_char_acl2023_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_last_char_acl2023_pipeline_en.md new file mode 100644 index 00000000000000..1505ab4f57a881 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_last_char_acl2023_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_base_last_char_acl2023_pipeline pipeline RoBertaEmbeddings from hitachi-nlp +author: John Snow Labs +name: roberta_base_last_char_acl2023_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_last_char_acl2023_pipeline` is a English model originally trained by hitachi-nlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_last_char_acl2023_pipeline_en_5.5.0_3.0_1725931227004.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_last_char_acl2023_pipeline_en_5.5.0_3.0_1725931227004.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_base_last_char_acl2023_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_base_last_char_acl2023_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_last_char_acl2023_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|465.9 MB| + +## References + +https://huggingface.co/hitachi-nlp/roberta-base_last-char_acl2023 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_base_roberta_model_enyonam_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_roberta_model_enyonam_en.md new file mode 100644 index 00000000000000..240dd9d060c9f3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_base_roberta_model_enyonam_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_base_roberta_model_enyonam RoBertaForSequenceClassification from Enyonam +author: John Snow Labs +name: roberta_base_roberta_model_enyonam +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_base_roberta_model_enyonam` is a English model originally trained by Enyonam. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_base_roberta_model_enyonam_en_5.5.0_3.0_1725962592596.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_base_roberta_model_enyonam_en_5.5.0_3.0_1725962592596.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_roberta_model_enyonam","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_base_roberta_model_enyonam", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_base_roberta_model_enyonam| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|424.6 MB| + +## References + +https://huggingface.co/Enyonam/roberta-base-Roberta-Model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_finetuned_medquad_3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_finetuned_medquad_3_pipeline_en.md new file mode 100644 index 00000000000000..4fa473edeb06ae --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_finetuned_medquad_3_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English roberta_finetuned_medquad_3_pipeline pipeline RoBertaForQuestionAnswering from DataScientist1122 +author: John Snow Labs +name: roberta_finetuned_medquad_3_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_finetuned_medquad_3_pipeline` is a English model originally trained by DataScientist1122. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_finetuned_medquad_3_pipeline_en_5.5.0_3.0_1725959160403.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_finetuned_medquad_3_pipeline_en_5.5.0_3.0_1725959160403.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_finetuned_medquad_3_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_finetuned_medquad_3_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_finetuned_medquad_3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|443.8 MB| + +## References + +https://huggingface.co/DataScientist1122/roberta-finetuned-medquad_3 + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_finetuned_qa_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_finetuned_qa_pipeline_en.md new file mode 100644 index 00000000000000..1a945c85bd664c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_finetuned_qa_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English roberta_finetuned_qa_pipeline pipeline RoBertaForQuestionAnswering from malizade +author: John Snow Labs +name: roberta_finetuned_qa_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_finetuned_qa_pipeline` is a English model originally trained by malizade. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_finetuned_qa_pipeline_en_5.5.0_3.0_1725958623358.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_finetuned_qa_pipeline_en_5.5.0_3.0_1725958623358.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_finetuned_qa_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_finetuned_qa_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_finetuned_qa_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|463.6 MB| + +## References + +https://huggingface.co/malizade/roberta-finetuned-QA + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_finetuned_subjqa_movies_2_ethegem_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_finetuned_subjqa_movies_2_ethegem_pipeline_en.md new file mode 100644 index 00000000000000..e5ad20b5825657 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_finetuned_subjqa_movies_2_ethegem_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English roberta_finetuned_subjqa_movies_2_ethegem_pipeline pipeline RoBertaForQuestionAnswering from Ethegem +author: John Snow Labs +name: roberta_finetuned_subjqa_movies_2_ethegem_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_finetuned_subjqa_movies_2_ethegem_pipeline` is a English model originally trained by Ethegem. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_finetuned_subjqa_movies_2_ethegem_pipeline_en_5.5.0_3.0_1725958490216.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_finetuned_subjqa_movies_2_ethegem_pipeline_en_5.5.0_3.0_1725958490216.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_finetuned_subjqa_movies_2_ethegem_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_finetuned_subjqa_movies_2_ethegem_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_finetuned_subjqa_movies_2_ethegem_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|464.1 MB| + +## References + +https://huggingface.co/Ethegem/roberta-finetuned-subjqa-movies_2 + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_finetuned_vitaminc_50k_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_finetuned_vitaminc_50k_en.md new file mode 100644 index 00000000000000..8f4886f1f54c7a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_finetuned_vitaminc_50k_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_finetuned_vitaminc_50k RoBertaForSequenceClassification from kamileyagci +author: John Snow Labs +name: roberta_finetuned_vitaminc_50k +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_finetuned_vitaminc_50k` is a English model originally trained by kamileyagci. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_finetuned_vitaminc_50k_en_5.5.0_3.0_1725962901685.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_finetuned_vitaminc_50k_en_5.5.0_3.0_1725962901685.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_finetuned_vitaminc_50k","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("roberta_finetuned_vitaminc_50k", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_finetuned_vitaminc_50k| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/kamileyagci/roberta-finetuned_vitaminc_50K \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_large_pile_lr2e_5_bs16_8gpu_1700000_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_large_pile_lr2e_5_bs16_8gpu_1700000_en.md new file mode 100644 index 00000000000000..c8fe5101b9f9ff --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_large_pile_lr2e_5_bs16_8gpu_1700000_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English roberta_large_pile_lr2e_5_bs16_8gpu_1700000 RoBertaEmbeddings from socialfoundations +author: John Snow Labs +name: roberta_large_pile_lr2e_5_bs16_8gpu_1700000 +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_large_pile_lr2e_5_bs16_8gpu_1700000` is a English model originally trained by socialfoundations. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_large_pile_lr2e_5_bs16_8gpu_1700000_en_5.5.0_3.0_1725931456982.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_large_pile_lr2e_5_bs16_8gpu_1700000_en_5.5.0_3.0_1725931456982.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("roberta_large_pile_lr2e_5_bs16_8gpu_1700000","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("roberta_large_pile_lr2e_5_bs16_8gpu_1700000","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_large_pile_lr2e_5_bs16_8gpu_1700000| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/socialfoundations/roberta-large-pile-lr2e-5-bs16-8gpu-1700000 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_reman_tec_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_reman_tec_pipeline_en.md new file mode 100644 index 00000000000000..882a0e913da933 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_reman_tec_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_reman_tec_pipeline pipeline RoBertaForSequenceClassification from gustavecortal +author: John Snow Labs +name: roberta_reman_tec_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_reman_tec_pipeline` is a English model originally trained by gustavecortal. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_reman_tec_pipeline_en_5.5.0_3.0_1725965411464.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_reman_tec_pipeline_en_5.5.0_3.0_1725965411464.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_reman_tec_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_reman_tec_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_reman_tec_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.1 MB| + +## References + +https://huggingface.co/gustavecortal/roberta-reman-tec + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_stance_compqa_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_stance_compqa_pipeline_en.md new file mode 100644 index 00000000000000..74ec6e959b75fd --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_stance_compqa_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English roberta_stance_compqa_pipeline pipeline RoBertaForSequenceClassification from lilaspourpre +author: John Snow Labs +name: roberta_stance_compqa_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_stance_compqa_pipeline` is a English model originally trained by lilaspourpre. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_stance_compqa_pipeline_en_5.5.0_3.0_1725965460011.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_stance_compqa_pipeline_en_5.5.0_3.0_1725965460011.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_stance_compqa_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_stance_compqa_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_stance_compqa_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/lilaspourpre/roberta-stance-compqa + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_tajik_tg.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_tajik_tg.md new file mode 100644 index 00000000000000..b50c45a33d1a30 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_tajik_tg.md @@ -0,0 +1,94 @@ +--- +layout: model +title: Tajik roberta_tajik RoBertaEmbeddings from muhtasham +author: John Snow Labs +name: roberta_tajik +date: 2024-09-10 +tags: [tg, open_source, onnx, embeddings, roberta] +task: Embeddings +language: tg +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_tajik` is a Tajik model originally trained by muhtasham. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_tajik_tg_5.5.0_3.0_1725930615944.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_tajik_tg_5.5.0_3.0_1725930615944.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("roberta_tajik","tg") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("roberta_tajik","tg") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_tajik| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|tg| +|Size:|311.7 MB| + +## References + +https://huggingface.co/muhtasham/RoBERTa-tg \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-roberta_updated_model_02b_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-roberta_updated_model_02b_pipeline_en.md new file mode 100644 index 00000000000000..55ac26a55d7521 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-roberta_updated_model_02b_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English roberta_updated_model_02b_pipeline pipeline RoBertaForQuestionAnswering from Naima12 +author: John Snow Labs +name: roberta_updated_model_02b_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`roberta_updated_model_02b_pipeline` is a English model originally trained by Naima12. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_updated_model_02b_pipeline_en_5.5.0_3.0_1725958629827.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_updated_model_02b_pipeline_en_5.5.0_3.0_1725958629827.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("roberta_updated_model_02b_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("roberta_updated_model_02b_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|roberta_updated_model_02b_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|422.6 MB| + +## References + +https://huggingface.co/Naima12/RoBERTa-Updated-Model_02B + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-robertita_cased_finetuned_squad_en.md b/docs/_posts/ahmedlone127/2024-09-10-robertita_cased_finetuned_squad_en.md new file mode 100644 index 00000000000000..917207d66f41dc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-robertita_cased_finetuned_squad_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English robertita_cased_finetuned_squad RoBertaForQuestionAnswering from luischir +author: John Snow Labs +name: robertita_cased_finetuned_squad +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, roberta] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`robertita_cased_finetuned_squad` is a English model originally trained by luischir. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/robertita_cased_finetuned_squad_en_5.5.0_3.0_1725958907487.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/robertita_cased_finetuned_squad_en_5.5.0_3.0_1725958907487.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = RoBertaForQuestionAnswering.pretrained("robertita_cased_finetuned_squad","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = RoBertaForQuestionAnswering.pretrained("robertita_cased_finetuned_squad", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
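+
+After the pipeline above has run, the extracted span is stored in the `answer` column. A minimal sketch of reading it back, assuming the Python pipeline above was executed as written:
+
+```python
+from pyspark.sql import functions as F
+
+# "result" holds the answer text predicted for each question/context pair
+pipelineDF.select(F.explode(F.col("answer.result")).alias("answer")).show(truncate=False)
+```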
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|robertita_cased_finetuned_squad| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|407.2 MB| + +## References + +https://huggingface.co/luischir/robertita-cased-finetuned-squad \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-sean_question_answering_model_en.md b/docs/_posts/ahmedlone127/2024-09-10-sean_question_answering_model_en.md new file mode 100644 index 00000000000000..ce8b2b7bee4086 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-sean_question_answering_model_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English sean_question_answering_model DistilBertForQuestionAnswering from SeanKuehl +author: John Snow Labs +name: sean_question_answering_model +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sean_question_answering_model` is a English model originally trained by SeanKuehl. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sean_question_answering_model_en_5.5.0_3.0_1725932578446.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sean_question_answering_model_en_5.5.0_3.0_1725932578446.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("sean_question_answering_model","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("sean_question_answering_model", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sean_question_answering_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/SeanKuehl/Sean_Question_Answering_Model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-sean_question_answering_model_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-sean_question_answering_model_pipeline_en.md new file mode 100644 index 00000000000000..c8abfa2eaee0f9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-sean_question_answering_model_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English sean_question_answering_model_pipeline pipeline DistilBertForQuestionAnswering from SeanKuehl +author: John Snow Labs +name: sean_question_answering_model_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sean_question_answering_model_pipeline` is a English model originally trained by SeanKuehl. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sean_question_answering_model_pipeline_en_5.5.0_3.0_1725932589616.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sean_question_answering_model_pipeline_en_5.5.0_3.0_1725932589616.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("sean_question_answering_model_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("sean_question_answering_model_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sean_question_answering_model_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/SeanKuehl/Sean_Question_Answering_Model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-semeval2020_task11_sinhalese_en.md b/docs/_posts/ahmedlone127/2024-09-10-semeval2020_task11_sinhalese_en.md new file mode 100644 index 00000000000000..e265406a2c9aa7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-semeval2020_task11_sinhalese_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English semeval2020_task11_sinhalese RoBertaEmbeddings from tuscan-chicken-wrap +author: John Snow Labs +name: semeval2020_task11_sinhalese +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`semeval2020_task11_sinhalese` is a English model originally trained by tuscan-chicken-wrap. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/semeval2020_task11_sinhalese_en_5.5.0_3.0_1725931118865.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/semeval2020_task11_sinhalese_en_5.5.0_3.0_1725931118865.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("semeval2020_task11_sinhalese","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("semeval2020_task11_sinhalese","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|semeval2020_task11_sinhalese| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/tuscan-chicken-wrap/semeval2020_task11_si \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-semeval2020_task11_sinhalese_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-semeval2020_task11_sinhalese_pipeline_en.md new file mode 100644 index 00000000000000..fb4c80bc22e9f4 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-semeval2020_task11_sinhalese_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English semeval2020_task11_sinhalese_pipeline pipeline RoBertaEmbeddings from tuscan-chicken-wrap +author: John Snow Labs +name: semeval2020_task11_sinhalese_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`semeval2020_task11_sinhalese_pipeline` is a English model originally trained by tuscan-chicken-wrap. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/semeval2020_task11_sinhalese_pipeline_en_5.5.0_3.0_1725931181196.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/semeval2020_task11_sinhalese_pipeline_en_5.5.0_3.0_1725931181196.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("semeval2020_task11_sinhalese_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("semeval2020_task11_sinhalese_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|semeval2020_task11_sinhalese_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/tuscan-chicken-wrap/semeval2020_task11_si + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-sentiment_bert_large_e8_b16_en.md b/docs/_posts/ahmedlone127/2024-09-10-sentiment_bert_large_e8_b16_en.md new file mode 100644 index 00000000000000..a62208228b077a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-sentiment_bert_large_e8_b16_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English sentiment_bert_large_e8_b16 RoBertaForSequenceClassification from JerryYanJiang +author: John Snow Labs +name: sentiment_bert_large_e8_b16 +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sentiment_bert_large_e8_b16` is a English model originally trained by JerryYanJiang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sentiment_bert_large_e8_b16_en_5.5.0_3.0_1725962725391.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sentiment_bert_large_e8_b16_en_5.5.0_3.0_1725962725391.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("sentiment_bert_large_e8_b16","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("sentiment_bert_large_e8_b16", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sentiment_bert_large_e8_b16| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/JerryYanJiang/sentiment-bert-large-e8-b16 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-sentiment_bert_large_e8_b16_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-sentiment_bert_large_e8_b16_pipeline_en.md new file mode 100644 index 00000000000000..73be28fda98a0e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-sentiment_bert_large_e8_b16_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English sentiment_bert_large_e8_b16_pipeline pipeline RoBertaForSequenceClassification from JerryYanJiang +author: John Snow Labs +name: sentiment_bert_large_e8_b16_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`sentiment_bert_large_e8_b16_pipeline` is a English model originally trained by JerryYanJiang. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/sentiment_bert_large_e8_b16_pipeline_en_5.5.0_3.0_1725962792752.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/sentiment_bert_large_e8_b16_pipeline_en_5.5.0_3.0_1725962792752.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("sentiment_bert_large_e8_b16_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("sentiment_bert_large_e8_b16_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|sentiment_bert_large_e8_b16_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.3 GB| + +## References + +https://huggingface.co/JerryYanJiang/sentiment-bert-large-e8-b16 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-somd_xlm_stage1_v1_en.md b/docs/_posts/ahmedlone127/2024-09-10-somd_xlm_stage1_v1_en.md new file mode 100644 index 00000000000000..18ad49b4a1b7b9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-somd_xlm_stage1_v1_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English somd_xlm_stage1_v1 XlmRoBertaForTokenClassification from ThuyNT03 +author: John Snow Labs +name: somd_xlm_stage1_v1 +date: 2024-09-10 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`somd_xlm_stage1_v1` is a English model originally trained by ThuyNT03. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/somd_xlm_stage1_v1_en_5.5.0_3.0_1725929289048.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/somd_xlm_stage1_v1_en_5.5.0_3.0_1725929289048.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("somd_xlm_stage1_v1","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("somd_xlm_stage1_v1", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
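+
+The `ner` column produced above holds one predicted tag per token. A minimal sketch of viewing tokens next to their tags, assuming the Python pipeline above was executed as written:
+
+```python
+from pyspark.sql import functions as F
+
+# Token strings and their predicted NER labels, aligned by position
+pipelineDF.select(
+    F.col("token.result").alias("tokens"),
+    F.col("ner.result").alias("ner_labels")
+).show(truncate=False)
+```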
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|somd_xlm_stage1_v1| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|797.6 MB| + +## References + +https://huggingface.co/ThuyNT03/SOMD-xlm-stage1-v1 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-southern_sotho_all_mpnet_finetuned_comb_1500_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-southern_sotho_all_mpnet_finetuned_comb_1500_pipeline_en.md new file mode 100644 index 00000000000000..313b364f79ac36 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-southern_sotho_all_mpnet_finetuned_comb_1500_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English southern_sotho_all_mpnet_finetuned_comb_1500_pipeline pipeline MPNetEmbeddings from danfeg +author: John Snow Labs +name: southern_sotho_all_mpnet_finetuned_comb_1500_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained MPNetEmbeddings, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`southern_sotho_all_mpnet_finetuned_comb_1500_pipeline` is a English model originally trained by danfeg. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/southern_sotho_all_mpnet_finetuned_comb_1500_pipeline_en_5.5.0_3.0_1725963969692.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/southern_sotho_all_mpnet_finetuned_comb_1500_pipeline_en_5.5.0_3.0_1725963969692.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("southern_sotho_all_mpnet_finetuned_comb_1500_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("southern_sotho_all_mpnet_finetuned_comb_1500_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|southern_sotho_all_mpnet_finetuned_comb_1500_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|406.9 MB| + +## References + +https://huggingface.co/danfeg/ST-ALL-MPNET_Finetuned-COMB-1500 + +## Included Models + +- DocumentAssembler +- MPNetEmbeddings \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-spbert_mlm_zero_en.md b/docs/_posts/ahmedlone127/2024-09-10-spbert_mlm_zero_en.md new file mode 100644 index 00000000000000..0025561d521c14 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-spbert_mlm_zero_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English spbert_mlm_zero BertForQuestionAnswering from razent +author: John Snow Labs +name: spbert_mlm_zero +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, bert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: BertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`spbert_mlm_zero` is a English model originally trained by razent. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/spbert_mlm_zero_en_5.5.0_3.0_1725926620557.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/spbert_mlm_zero_en_5.5.0_3.0_1725926620557.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = BertForQuestionAnswering.pretrained("spbert_mlm_zero","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = BertForQuestionAnswering.pretrained("spbert_mlm_zero", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
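+
+After `transform`, the predicted span for each question/context pair lands in the `answer` annotation column. A minimal way to inspect it (a sketch that relies only on the column names used in the example above):
+
+```python
+# Each value in "answer" is an annotation; ".result" is the extracted answer text.
+pipelineDF.select("answer.result").show(truncate=False)
+```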
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|spbert_mlm_zero| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|403.1 MB| + +## References + +https://huggingface.co/razent/spbert-mlm-zero \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-spbert_mlm_zero_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-spbert_mlm_zero_pipeline_en.md new file mode 100644 index 00000000000000..511c9e2da937f1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-spbert_mlm_zero_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English spbert_mlm_zero_pipeline pipeline BertForQuestionAnswering from razent +author: John Snow Labs +name: spbert_mlm_zero_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained BertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`spbert_mlm_zero_pipeline` is a English model originally trained by razent. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/spbert_mlm_zero_pipeline_en_5.5.0_3.0_1725926639859.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/spbert_mlm_zero_pipeline_en_5.5.0_3.0_1725926639859.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("spbert_mlm_zero_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("spbert_mlm_zero_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|spbert_mlm_zero_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|403.2 MB| + +## References + +https://huggingface.co/razent/spbert-mlm-zero + +## Included Models + +- MultiDocumentAssembler +- BertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-squad_clip_text_3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-squad_clip_text_3_pipeline_en.md new file mode 100644 index 00000000000000..2408392ba30f26 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-squad_clip_text_3_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English squad_clip_text_3_pipeline pipeline RoBertaForQuestionAnswering from AnonymousSub +author: John Snow Labs +name: squad_clip_text_3_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`squad_clip_text_3_pipeline` is a English model originally trained by AnonymousSub. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/squad_clip_text_3_pipeline_en_5.5.0_3.0_1725959159251.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/squad_clip_text_3_pipeline_en_5.5.0_3.0_1725959159251.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("squad_clip_text_3_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("squad_clip_text_3_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|squad_clip_text_3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|466.3 MB| + +## References + +https://huggingface.co/AnonymousSub/SQuAD_CLIP_text_3 + +## Included Models + +- MultiDocumentAssembler +- RoBertaForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-squad_qa_model_jamesmcmill_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-squad_qa_model_jamesmcmill_pipeline_en.md new file mode 100644 index 00000000000000..9682f085747c37 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-squad_qa_model_jamesmcmill_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English squad_qa_model_jamesmcmill_pipeline pipeline DistilBertForQuestionAnswering from JamesMcMill +author: John Snow Labs +name: squad_qa_model_jamesmcmill_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`squad_qa_model_jamesmcmill_pipeline` is a English model originally trained by JamesMcMill. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/squad_qa_model_jamesmcmill_pipeline_en_5.5.0_3.0_1725960056284.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/squad_qa_model_jamesmcmill_pipeline_en_5.5.0_3.0_1725960056284.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("squad_qa_model_jamesmcmill_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("squad_qa_model_jamesmcmill_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|squad_qa_model_jamesmcmill_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|247.3 MB| + +## References + +https://huggingface.co/JamesMcMill/squad_qa_model + +## Included Models + +- MultiDocumentAssembler +- DistilBertForQuestionAnswering \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-takalane_sot_roberta_en.md b/docs/_posts/ahmedlone127/2024-09-10-takalane_sot_roberta_en.md new file mode 100644 index 00000000000000..6e3173e5684176 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-takalane_sot_roberta_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English takalane_sot_roberta RoBertaEmbeddings from jannesg +author: John Snow Labs +name: takalane_sot_roberta +date: 2024-09-10 +tags: [en, open_source, onnx, embeddings, roberta] +task: Embeddings +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaEmbeddings +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`takalane_sot_roberta` is a English model originally trained by jannesg. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/takalane_sot_roberta_en_5.5.0_3.0_1725931487927.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/takalane_sot_roberta_en_5.5.0_3.0_1725931487927.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +documentAssembler = DocumentAssembler() \ + .setInputCol("text") \ + .setOutputCol("document") + +tokenizer = Tokenizer() \ + .setInputCols("document") \ + .setOutputCol("token") + +embeddings = RoBertaEmbeddings.pretrained("takalane_sot_roberta","en") \ + .setInputCols(["document", "token"]) \ + .setOutputCol("embeddings") + +pipeline = Pipeline().setStages([documentAssembler, tokenizer, embeddings]) +data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text") +pipelineModel = pipeline.fit(data) +pipelineDF = pipelineModel.transform(data) + +``` +```scala + +val documentAssembler = new DocumentAssembler() + .setInputCol("text") + .setOutputCol("document") + +val tokenizer = new Tokenizer() + .setInputCols(Array("document")) + .setOutputCol("token") + +val embeddings = RoBertaEmbeddings.pretrained("takalane_sot_roberta","en") + .setInputCols(Array("document", "token")) + .setOutputCol("embeddings") + +val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings)) +val data = Seq("I love spark-nlp").toDF("text") +val pipelineModel = pipeline.fit(data) +val pipelineDF = pipelineModel.transform(data) + +``` +
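+
+After running the example, each token's vector is stored in the `embeddings` annotation column. A quick inspection sketch, where the annotation's `result` field holds the token text and its `embeddings` field the float vector:
+
+```python
+from pyspark.sql import functions as F
+
+# One row per token: the token string next to its embedding vector.
+pipelineDF.select(F.explode("embeddings").alias("e")) \
+    .select("e.result", "e.embeddings") \
+    .show(5, truncate=80)
+```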
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|takalane_sot_roberta| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[roberta]| +|Language:|en| +|Size:|310.7 MB| + +## References + +https://huggingface.co/jannesg/takalane_sot_roberta \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-test_rag_model_en.md b/docs/_posts/ahmedlone127/2024-09-10-test_rag_model_en.md new file mode 100644 index 00000000000000..f87b5674644dea --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-test_rag_model_en.md @@ -0,0 +1,86 @@ +--- +layout: model +title: English test_rag_model DistilBertForQuestionAnswering from Artix1806 +author: John Snow Labs +name: test_rag_model +date: 2024-09-10 +tags: [en, open_source, onnx, question_answering, distilbert] +task: Question Answering +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: DistilBertForQuestionAnswering +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`test_rag_model` is a English model originally trained by Artix1806. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/test_rag_model_en_5.5.0_3.0_1725932121612.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/test_rag_model_en_5.5.0_3.0_1725932121612.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+documentAssembler = MultiDocumentAssembler() \
+    .setInputCols(["question", "context"]) \
+    .setOutputCols(["document_question", "document_context"])
+
+spanClassifier = DistilBertForQuestionAnswering.pretrained("test_rag_model","en") \
+    .setInputCols(["document_question","document_context"]) \
+    .setOutputCol("answer")
+
+pipeline = Pipeline().setStages([documentAssembler, spanClassifier])
+data = spark.createDataFrame([["What framework do I use?","I use spark-nlp."]]).toDF("question", "context")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new MultiDocumentAssembler()
+    .setInputCols(Array("question", "context"))
+    .setOutputCols(Array("document_question", "document_context"))
+
+val spanClassifier = DistilBertForQuestionAnswering.pretrained("test_rag_model", "en")
+    .setInputCols(Array("document_question","document_context"))
+    .setOutputCol("answer")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))
+val data = Seq(("What framework do I use?","I use spark-nlp.")).toDF("question", "context")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|test_rag_model| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document_question, document_context]| +|Output Labels:|[answer]| +|Language:|en| +|Size:|247.2 MB| + +## References + +https://huggingface.co/Artix1806/test_rag_model \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-training_v2_pipeline_ru.md b/docs/_posts/ahmedlone127/2024-09-10-training_v2_pipeline_ru.md new file mode 100644 index 00000000000000..47d95073c12948 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-training_v2_pipeline_ru.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Russian training_v2_pipeline pipeline WhisperForCTC from SofiaK +author: John Snow Labs +name: training_v2_pipeline +date: 2024-09-10 +tags: [ru, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: ru +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`training_v2_pipeline` is a Russian model originally trained by SofiaK. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/training_v2_pipeline_ru_5.5.0_3.0_1725949074303.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/training_v2_pipeline_ru_5.5.0_3.0_1725949074303.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("training_v2_pipeline", lang = "ru") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("training_v2_pipeline", lang = "ru") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|training_v2_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|ru| +|Size:|641.7 MB| + +## References + +https://huggingface.co/SofiaK/training-v2 + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-twiiter_try8_fold0_en.md b/docs/_posts/ahmedlone127/2024-09-10-twiiter_try8_fold0_en.md new file mode 100644 index 00000000000000..833b2085e4f2cc --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-twiiter_try8_fold0_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English twiiter_try8_fold0 RoBertaForSequenceClassification from yanezh +author: John Snow Labs +name: twiiter_try8_fold0 +date: 2024-09-10 +tags: [en, open_source, onnx, sequence_classification, roberta] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: RoBertaForSequenceClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`twiiter_try8_fold0` is a English model originally trained by yanezh. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/twiiter_try8_fold0_en_5.5.0_3.0_1725966205765.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/twiiter_try8_fold0_en_5.5.0_3.0_1725966205765.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+sequenceClassifier = RoBertaForSequenceClassification.pretrained("twiiter_try8_fold0","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("class")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols(Array("document"))
+    .setOutputCol("token")
+
+val sequenceClassifier = RoBertaForSequenceClassification.pretrained("twiiter_try8_fold0", "en")
+  .setInputCols(Array("document","token"))
+  .setOutputCol("class")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
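+
+The winning label for each row ends up in the `class` column produced above, and the classifier also attaches its raw scores to each annotation's metadata. A short inspection sketch:
+
+```python
+# ".result" is the predicted label; ".metadata" carries the per-label scores.
+pipelineDF.select("class.result", "class.metadata").show(truncate=False)
+```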
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|twiiter_try8_fold0| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[class]| +|Language:|en| +|Size:|468.2 MB| + +## References + +https://huggingface.co/yanezh/twiiter_try8_fold0 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-twitter_roberta_base_dec2021_emotion_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-twitter_roberta_base_dec2021_emotion_pipeline_en.md new file mode 100644 index 00000000000000..3bb8d8b5602e7c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-twitter_roberta_base_dec2021_emotion_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English twitter_roberta_base_dec2021_emotion_pipeline pipeline RoBertaForSequenceClassification from cardiffnlp +author: John Snow Labs +name: twitter_roberta_base_dec2021_emotion_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Text Classification +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained RoBertaForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`twitter_roberta_base_dec2021_emotion_pipeline` is a English model originally trained by cardiffnlp. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/twitter_roberta_base_dec2021_emotion_pipeline_en_5.5.0_3.0_1725964826514.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/twitter_roberta_base_dec2021_emotion_pipeline_en_5.5.0_3.0_1725964826514.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("twitter_roberta_base_dec2021_emotion_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("twitter_roberta_base_dec2021_emotion_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
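+
+The `df` referenced above is assumed to be a DataFrame that already has the text column expected by the pipeline's first stage. For a quick smoke test without building a DataFrame, the pipeline object can also be applied to a single string; the sketch below assumes a Spark NLP session has already been started (for example via `sparknlp.start()`):
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("twitter_roberta_base_dec2021_emotion_pipeline", lang = "en")
+
+# annotate() runs every stage on one string and returns a dict keyed by the
+# pipeline's output columns (document, token, class, ...).
+result = pipeline.annotate("I love spark-nlp")
+print(result)
+```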
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|twitter_roberta_base_dec2021_emotion_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|468.3 MB| + +## References + +https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-emotion + +## Included Models + +- DocumentAssembler +- TokenizerModel +- RoBertaForSequenceClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_asr_atc_v6_en.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_asr_atc_v6_en.md new file mode 100644 index 00000000000000..f406a511f0025a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_asr_atc_v6_en.md @@ -0,0 +1,84 @@ +--- +layout: model +title: English whisper_asr_atc_v6 WhisperForCTC from AshtonLKY +author: John Snow Labs +name: whisper_asr_atc_v6 +date: 2024-09-10 +tags: [en, open_source, onnx, asr, whisper] +task: Automatic Speech Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: WhisperForCTC +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_asr_atc_v6` is a English model originally trained by AshtonLKY. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_asr_atc_v6_en_5.5.0_3.0_1725944039465.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_asr_atc_v6_en_5.5.0_3.0_1725944039465.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+# data: a DataFrame with a float-array "audio_content" column of raw audio samples
+audioAssembler = AudioAssembler() \
+    .setInputCol("audio_content") \
+    .setOutputCol("audio_assembler")
+
+speechToText = WhisperForCTC.pretrained("whisper_asr_atc_v6","en") \
+    .setInputCols(["audio_assembler"]) \
+    .setOutputCol("text")
+
+pipeline = Pipeline().setStages([audioAssembler, speechToText])
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+// data: a DataFrame with a float-array "audio_content" column of raw audio samples
+val audioAssembler = new AudioAssembler()
+    .setInputCol("audio_content")
+    .setOutputCol("audio_assembler")
+
+val speechToText = WhisperForCTC.pretrained("whisper_asr_atc_v6", "en")
+    .setInputCols(Array("audio_assembler"))
+    .setOutputCol("text")
+
+val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
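+
+Both snippets above assume a `data` DataFrame already exists. One way to build it from a local WAV file is sketched below; the use of `librosa` and the 16 kHz mono assumption are illustrative choices rather than requirements stated by this card, and depending on the Spark NLP version the column may need to be cast to an array of single-precision floats:
+
+```python
+import librosa  # assumption: any loader that yields raw float samples will do
+
+# Whisper checkpoints expect 16 kHz mono audio; "audio_content" must hold the raw
+# waveform as an array of floats, which AudioAssembler then wraps for the model.
+samples, _ = librosa.load("sample.wav", sr=16000)
+data = spark.createDataFrame([(samples.tolist(),)], ["audio_content"])
+```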
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_asr_atc_v6| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[audio_assembler]| +|Output Labels:|[text]| +|Language:|en| +|Size:|1.7 GB| + +## References + +https://huggingface.co/AshtonLKY/Whisper_ASR_ATC_v6 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_asr_atc_v6_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_asr_atc_v6_pipeline_en.md new file mode 100644 index 00000000000000..d9a830cb3873a8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_asr_atc_v6_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English whisper_asr_atc_v6_pipeline pipeline WhisperForCTC from AshtonLKY +author: John Snow Labs +name: whisper_asr_atc_v6_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_asr_atc_v6_pipeline` is a English model originally trained by AshtonLKY. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_asr_atc_v6_pipeline_en_5.5.0_3.0_1725944127633.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_asr_atc_v6_pipeline_en_5.5.0_3.0_1725944127633.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("whisper_asr_atc_v6_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("whisper_asr_atc_v6_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_asr_atc_v6_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.7 GB| + +## References + +https://huggingface.co/AshtonLKY/Whisper_ASR_ATC_v6 + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_base_russian_whitemouse84_pipeline_ru.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_base_russian_whitemouse84_pipeline_ru.md new file mode 100644 index 00000000000000..f77643b57ffae3 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_base_russian_whitemouse84_pipeline_ru.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Russian whisper_base_russian_whitemouse84_pipeline pipeline WhisperForCTC from whitemouse84 +author: John Snow Labs +name: whisper_base_russian_whitemouse84_pipeline +date: 2024-09-10 +tags: [ru, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: ru +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_base_russian_whitemouse84_pipeline` is a Russian model originally trained by whitemouse84. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_base_russian_whitemouse84_pipeline_ru_5.5.0_3.0_1725939962369.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_base_russian_whitemouse84_pipeline_ru_5.5.0_3.0_1725939962369.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("whisper_base_russian_whitemouse84_pipeline", lang = "ru") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("whisper_base_russian_whitemouse84_pipeline", lang = "ru") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_base_russian_whitemouse84_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|ru| +|Size:|642.0 MB| + +## References + +https://huggingface.co/whitemouse84/whisper-base-ru + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_base_russian_whitemouse84_ru.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_base_russian_whitemouse84_ru.md new file mode 100644 index 00000000000000..5e48c8212d4634 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_base_russian_whitemouse84_ru.md @@ -0,0 +1,84 @@ +--- +layout: model +title: Russian whisper_base_russian_whitemouse84 WhisperForCTC from whitemouse84 +author: John Snow Labs +name: whisper_base_russian_whitemouse84 +date: 2024-09-10 +tags: [ru, open_source, onnx, asr, whisper] +task: Automatic Speech Recognition +language: ru +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: WhisperForCTC +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_base_russian_whitemouse84` is a Russian model originally trained by whitemouse84. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_base_russian_whitemouse84_ru_5.5.0_3.0_1725939931166.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_base_russian_whitemouse84_ru_5.5.0_3.0_1725939931166.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+# data: a DataFrame with a float-array "audio_content" column of raw audio samples
+audioAssembler = AudioAssembler() \
+    .setInputCol("audio_content") \
+    .setOutputCol("audio_assembler")
+
+speechToText = WhisperForCTC.pretrained("whisper_base_russian_whitemouse84","ru") \
+    .setInputCols(["audio_assembler"]) \
+    .setOutputCol("text")
+
+pipeline = Pipeline().setStages([audioAssembler, speechToText])
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+// data: a DataFrame with a float-array "audio_content" column of raw audio samples
+val audioAssembler = new AudioAssembler()
+    .setInputCol("audio_content")
+    .setOutputCol("audio_assembler")
+
+val speechToText = WhisperForCTC.pretrained("whisper_base_russian_whitemouse84", "ru")
+    .setInputCols(Array("audio_assembler"))
+    .setOutputCol("text")
+
+val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_base_russian_whitemouse84| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[audio_assembler]| +|Output Labels:|[text]| +|Language:|ru| +|Size:|642.0 MB| + +## References + +https://huggingface.co/whitemouse84/whisper-base-ru \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_small_arabic_raghadalghonaim_ar.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_arabic_raghadalghonaim_ar.md new file mode 100644 index 00000000000000..b1ee083bfddf21 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_arabic_raghadalghonaim_ar.md @@ -0,0 +1,84 @@ +--- +layout: model +title: Arabic whisper_small_arabic_raghadalghonaim WhisperForCTC from raghadalghonaim +author: John Snow Labs +name: whisper_small_arabic_raghadalghonaim +date: 2024-09-10 +tags: [ar, open_source, onnx, asr, whisper] +task: Automatic Speech Recognition +language: ar +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: WhisperForCTC +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_small_arabic_raghadalghonaim` is a Arabic model originally trained by raghadalghonaim. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_small_arabic_raghadalghonaim_ar_5.5.0_3.0_1725943472572.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_small_arabic_raghadalghonaim_ar_5.5.0_3.0_1725943472572.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+# data: a DataFrame with a float-array "audio_content" column of raw audio samples
+audioAssembler = AudioAssembler() \
+    .setInputCol("audio_content") \
+    .setOutputCol("audio_assembler")
+
+speechToText = WhisperForCTC.pretrained("whisper_small_arabic_raghadalghonaim","ar") \
+    .setInputCols(["audio_assembler"]) \
+    .setOutputCol("text")
+
+pipeline = Pipeline().setStages([audioAssembler, speechToText])
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+// data: a DataFrame with a float-array "audio_content" column of raw audio samples
+val audioAssembler = new AudioAssembler()
+    .setInputCol("audio_content")
+    .setOutputCol("audio_assembler")
+
+val speechToText = WhisperForCTC.pretrained("whisper_small_arabic_raghadalghonaim", "ar")
+    .setInputCols(Array("audio_assembler"))
+    .setOutputCol("text")
+
+val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_small_arabic_raghadalghonaim| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[audio_assembler]| +|Output Labels:|[text]| +|Language:|ar| +|Size:|1.7 GB| + +## References + +https://huggingface.co/raghadalghonaim/whisper-small-ar \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_small_chinese_hanson92828_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_chinese_hanson92828_pipeline_en.md new file mode 100644 index 00000000000000..d04221023d8171 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_chinese_hanson92828_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English whisper_small_chinese_hanson92828_pipeline pipeline WhisperForCTC from hanson92828 +author: John Snow Labs +name: whisper_small_chinese_hanson92828_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_small_chinese_hanson92828_pipeline` is a English model originally trained by hanson92828. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_small_chinese_hanson92828_pipeline_en_5.5.0_3.0_1725940256295.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_small_chinese_hanson92828_pipeline_en_5.5.0_3.0_1725940256295.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("whisper_small_chinese_hanson92828_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("whisper_small_chinese_hanson92828_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_small_chinese_hanson92828_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.7 GB| + +## References + +https://huggingface.co/hanson92828/whisper-small-chinese + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_small_divehi_sanchit_gandhi_dv.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_divehi_sanchit_gandhi_dv.md new file mode 100644 index 00000000000000..693a36865d9aac --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_divehi_sanchit_gandhi_dv.md @@ -0,0 +1,84 @@ +--- +layout: model +title: Dhivehi, Divehi, Maldivian whisper_small_divehi_sanchit_gandhi WhisperForCTC from sanchit-gandhi +author: John Snow Labs +name: whisper_small_divehi_sanchit_gandhi +date: 2024-09-10 +tags: [dv, open_source, onnx, asr, whisper] +task: Automatic Speech Recognition +language: dv +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: WhisperForCTC +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_small_divehi_sanchit_gandhi` is a Dhivehi, Divehi, Maldivian model originally trained by sanchit-gandhi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_small_divehi_sanchit_gandhi_dv_5.5.0_3.0_1725942438806.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_small_divehi_sanchit_gandhi_dv_5.5.0_3.0_1725942438806.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+# data: a DataFrame with a float-array "audio_content" column of raw audio samples
+audioAssembler = AudioAssembler() \
+    .setInputCol("audio_content") \
+    .setOutputCol("audio_assembler")
+
+speechToText = WhisperForCTC.pretrained("whisper_small_divehi_sanchit_gandhi","dv") \
+    .setInputCols(["audio_assembler"]) \
+    .setOutputCol("text")
+
+pipeline = Pipeline().setStages([audioAssembler, speechToText])
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+// data: a DataFrame with a float-array "audio_content" column of raw audio samples
+val audioAssembler = new AudioAssembler()
+    .setInputCol("audio_content")
+    .setOutputCol("audio_assembler")
+
+val speechToText = WhisperForCTC.pretrained("whisper_small_divehi_sanchit_gandhi", "dv")
+    .setInputCols(Array("audio_assembler"))
+    .setOutputCol("text")
+
+val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_small_divehi_sanchit_gandhi| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[audio_assembler]| +|Output Labels:|[text]| +|Language:|dv| +|Size:|1.7 GB| + +## References + +https://huggingface.co/sanchit-gandhi/whisper-small-dv \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_small_eg_en.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_eg_en.md new file mode 100644 index 00000000000000..79afafe3b1d3ed --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_eg_en.md @@ -0,0 +1,84 @@ +--- +layout: model +title: English whisper_small_eg WhisperForCTC from tawreck-hasaballah +author: John Snow Labs +name: whisper_small_eg +date: 2024-09-10 +tags: [en, open_source, onnx, asr, whisper] +task: Automatic Speech Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: WhisperForCTC +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_small_eg` is a English model originally trained by tawreck-hasaballah. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_small_eg_en_5.5.0_3.0_1725940362968.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_small_eg_en_5.5.0_3.0_1725940362968.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+# data: a DataFrame with a float-array "audio_content" column of raw audio samples
+audioAssembler = AudioAssembler() \
+    .setInputCol("audio_content") \
+    .setOutputCol("audio_assembler")
+
+speechToText = WhisperForCTC.pretrained("whisper_small_eg","en") \
+    .setInputCols(["audio_assembler"]) \
+    .setOutputCol("text")
+
+pipeline = Pipeline().setStages([audioAssembler, speechToText])
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+// data: a DataFrame with a float-array "audio_content" column of raw audio samples
+val audioAssembler = new AudioAssembler()
+    .setInputCol("audio_content")
+    .setOutputCol("audio_assembler")
+
+val speechToText = WhisperForCTC.pretrained("whisper_small_eg", "en")
+    .setInputCols(Array("audio_assembler"))
+    .setOutputCol("text")
+
+val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_small_eg| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[audio_assembler]| +|Output Labels:|[text]| +|Language:|en| +|Size:|1.7 GB| + +## References + +https://huggingface.co/tawreck-hasaballah/whisper-small-eg \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_small_eg_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_eg_pipeline_en.md new file mode 100644 index 00000000000000..020d2775a4995e --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_eg_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English whisper_small_eg_pipeline pipeline WhisperForCTC from tawreck-hasaballah +author: John Snow Labs +name: whisper_small_eg_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_small_eg_pipeline` is a English model originally trained by tawreck-hasaballah. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_small_eg_pipeline_en_5.5.0_3.0_1725940441158.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_small_eg_pipeline_en_5.5.0_3.0_1725940441158.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("whisper_small_eg_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("whisper_small_eg_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_small_eg_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.7 GB| + +## References + +https://huggingface.co/tawreck-hasaballah/whisper-small-eg + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_small_hindi_harshitjoshi_hi.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_hindi_harshitjoshi_hi.md new file mode 100644 index 00000000000000..bc08947226e3cd --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_hindi_harshitjoshi_hi.md @@ -0,0 +1,84 @@ +--- +layout: model +title: Hindi whisper_small_hindi_harshitjoshi WhisperForCTC from HarshitJoshi +author: John Snow Labs +name: whisper_small_hindi_harshitjoshi +date: 2024-09-10 +tags: [hi, open_source, onnx, asr, whisper] +task: Automatic Speech Recognition +language: hi +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: WhisperForCTC +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_small_hindi_harshitjoshi` is a Hindi model originally trained by HarshitJoshi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_small_hindi_harshitjoshi_hi_5.5.0_3.0_1725944771007.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_small_hindi_harshitjoshi_hi_5.5.0_3.0_1725944771007.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+# data: a DataFrame with a float-array "audio_content" column of raw audio samples
+audioAssembler = AudioAssembler() \
+    .setInputCol("audio_content") \
+    .setOutputCol("audio_assembler")
+
+speechToText = WhisperForCTC.pretrained("whisper_small_hindi_harshitjoshi","hi") \
+    .setInputCols(["audio_assembler"]) \
+    .setOutputCol("text")
+
+pipeline = Pipeline().setStages([audioAssembler, speechToText])
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+// data: a DataFrame with a float-array "audio_content" column of raw audio samples
+val audioAssembler = new AudioAssembler()
+    .setInputCol("audio_content")
+    .setOutputCol("audio_assembler")
+
+val speechToText = WhisperForCTC.pretrained("whisper_small_hindi_harshitjoshi", "hi")
+    .setInputCols(Array("audio_assembler"))
+    .setOutputCol("text")
+
+val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_small_hindi_harshitjoshi| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[audio_assembler]| +|Output Labels:|[text]| +|Language:|hi| +|Size:|1.7 GB| + +## References + +https://huggingface.co/HarshitJoshi/whisper-small-Hindi \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_small_hindi_harshitjoshi_pipeline_hi.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_hindi_harshitjoshi_pipeline_hi.md new file mode 100644 index 00000000000000..c324ffc19502a8 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_hindi_harshitjoshi_pipeline_hi.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Hindi whisper_small_hindi_harshitjoshi_pipeline pipeline WhisperForCTC from HarshitJoshi +author: John Snow Labs +name: whisper_small_hindi_harshitjoshi_pipeline +date: 2024-09-10 +tags: [hi, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: hi +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_small_hindi_harshitjoshi_pipeline` is a Hindi model originally trained by HarshitJoshi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_small_hindi_harshitjoshi_pipeline_hi_5.5.0_3.0_1725944855629.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_small_hindi_harshitjoshi_pipeline_hi_5.5.0_3.0_1725944855629.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("whisper_small_hindi_harshitjoshi_pipeline", lang = "hi") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("whisper_small_hindi_harshitjoshi_pipeline", lang = "hi") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_small_hindi_harshitjoshi_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|hi| +|Size:|1.7 GB| + +## References + +https://huggingface.co/HarshitJoshi/whisper-small-Hindi + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_small_hindi_tortoise17_hi.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_hindi_tortoise17_hi.md new file mode 100644 index 00000000000000..db1020c3da372c --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_hindi_tortoise17_hi.md @@ -0,0 +1,84 @@ +--- +layout: model +title: Hindi whisper_small_hindi_tortoise17 WhisperForCTC from Tortoise17 +author: John Snow Labs +name: whisper_small_hindi_tortoise17 +date: 2024-09-10 +tags: [hi, open_source, onnx, asr, whisper] +task: Automatic Speech Recognition +language: hi +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: WhisperForCTC +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_small_hindi_tortoise17` is a Hindi model originally trained by Tortoise17. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_small_hindi_tortoise17_hi_5.5.0_3.0_1725954598483.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_small_hindi_tortoise17_hi_5.5.0_3.0_1725954598483.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+from sparknlp.base import *
+from sparknlp.annotator import *
+from pyspark.ml import Pipeline
+
+# data: a DataFrame with a float-array "audio_content" column of raw audio samples
+audioAssembler = AudioAssembler() \
+    .setInputCol("audio_content") \
+    .setOutputCol("audio_assembler")
+
+speechToText = WhisperForCTC.pretrained("whisper_small_hindi_tortoise17","hi") \
+    .setInputCols(["audio_assembler"]) \
+    .setOutputCol("text")
+
+pipeline = Pipeline().setStages([audioAssembler, speechToText])
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+// data: a DataFrame with a float-array "audio_content" column of raw audio samples
+val audioAssembler = new AudioAssembler()
+    .setInputCol("audio_content")
+    .setOutputCol("audio_assembler")
+
+val speechToText = WhisperForCTC.pretrained("whisper_small_hindi_tortoise17", "hi")
+    .setInputCols(Array("audio_assembler"))
+    .setOutputCol("text")
+
+val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_small_hindi_tortoise17| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[audio_assembler]| +|Output Labels:|[text]| +|Language:|hi| +|Size:|1.1 GB| + +## References + +https://huggingface.co/Tortoise17/whisper-small-hi \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_small_hindi_tortoise17_pipeline_hi.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_hindi_tortoise17_pipeline_hi.md new file mode 100644 index 00000000000000..daa14b770281fb --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_hindi_tortoise17_pipeline_hi.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Hindi whisper_small_hindi_tortoise17_pipeline pipeline WhisperForCTC from Tortoise17 +author: John Snow Labs +name: whisper_small_hindi_tortoise17_pipeline +date: 2024-09-10 +tags: [hi, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: hi +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_small_hindi_tortoise17_pipeline` is a Hindi model originally trained by Tortoise17. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_small_hindi_tortoise17_pipeline_hi_5.5.0_3.0_1725954880855.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_small_hindi_tortoise17_pipeline_hi_5.5.0_3.0_1725954880855.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("whisper_small_hindi_tortoise17_pipeline", lang = "hi") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("whisper_small_hindi_tortoise17_pipeline", lang = "hi") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_small_hindi_tortoise17_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|hi| +|Size:|1.1 GB| + +## References + +https://huggingface.co/Tortoise17/whisper-small-hi + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_small_init_en.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_init_en.md new file mode 100644 index 00000000000000..0d2ea517751d13 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_init_en.md @@ -0,0 +1,84 @@ +--- +layout: model +title: English whisper_small_init WhisperForCTC from marccgrau +author: John Snow Labs +name: whisper_small_init +date: 2024-09-10 +tags: [en, open_source, onnx, asr, whisper] +task: Automatic Speech Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: WhisperForCTC +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_small_init` is a English model originally trained by marccgrau. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_small_init_en_5.5.0_3.0_1725942441183.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_small_init_en_5.5.0_3.0_1725942441183.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+audioAssembler = AudioAssembler() \
+    .setInputCol("audio_content") \
+    .setOutputCol("audio_assembler")
+
+speechToText = WhisperForCTC.pretrained("whisper_small_init","en") \
+    .setInputCols(["audio_assembler"]) \
+    .setOutputCol("text")
+
+pipeline = Pipeline().setStages([audioAssembler, speechToText])
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val audioAssembler = new AudioAssembler()
+    .setInputCol("audio_content")
+    .setOutputCol("audio_assembler")
+
+val speechToText = WhisperForCTC.pretrained("whisper_small_init", "en")
+    .setInputCols(Array("audio_assembler"))
+    .setOutputCol("text")
+
+val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_small_init| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[audio_assembler]| +|Output Labels:|[text]| +|Language:|en| +|Size:|1.7 GB| + +## References + +https://huggingface.co/marccgrau/whisper-small-init \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_small_init_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_init_pipeline_en.md new file mode 100644 index 00000000000000..8f806b1b799646 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_init_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English whisper_small_init_pipeline pipeline WhisperForCTC from marccgrau +author: John Snow Labs +name: whisper_small_init_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_small_init_pipeline` is a English model originally trained by marccgrau. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_small_init_pipeline_en_5.5.0_3.0_1725942531219.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_small_init_pipeline_en_5.5.0_3.0_1725942531219.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("whisper_small_init_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("whisper_small_init_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_small_init_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.7 GB| + +## References + +https://huggingface.co/marccgrau/whisper-small-init + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_small_punjabi_pipeline_pa.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_punjabi_pipeline_pa.md new file mode 100644 index 00000000000000..451b66846a6eed --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_punjabi_pipeline_pa.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Panjabi, Punjabi whisper_small_punjabi_pipeline pipeline WhisperForCTC from nayaniiii +author: John Snow Labs +name: whisper_small_punjabi_pipeline +date: 2024-09-10 +tags: [pa, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: pa +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_small_punjabi_pipeline` is a Panjabi, Punjabi model originally trained by nayaniiii. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_small_punjabi_pipeline_pa_5.5.0_3.0_1725942363960.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_small_punjabi_pipeline_pa_5.5.0_3.0_1725942363960.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("whisper_small_punjabi_pipeline", lang = "pa") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("whisper_small_punjabi_pipeline", lang = "pa") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_small_punjabi_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|pa| +|Size:|1.7 GB| + +## References + +https://huggingface.co/nayaniiii/whisper-small-punjabi + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_small_sanskasr_pipeline_sa.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_sanskasr_pipeline_sa.md new file mode 100644 index 00000000000000..a857d830a739aa --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_sanskasr_pipeline_sa.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Sanskrit whisper_small_sanskasr_pipeline pipeline WhisperForCTC from bvkbharadwaj +author: John Snow Labs +name: whisper_small_sanskasr_pipeline +date: 2024-09-10 +tags: [sa, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: sa +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_small_sanskasr_pipeline` is a Sanskrit model originally trained by bvkbharadwaj. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_small_sanskasr_pipeline_sa_5.5.0_3.0_1725942830055.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_small_sanskasr_pipeline_sa_5.5.0_3.0_1725942830055.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("whisper_small_sanskasr_pipeline", lang = "sa") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("whisper_small_sanskasr_pipeline", lang = "sa") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_small_sanskasr_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|sa| +|Size:|1.7 GB| + +## References + +https://huggingface.co/bvkbharadwaj/whisper-small-sanskasr + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_small_sanskasr_sa.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_sanskasr_sa.md new file mode 100644 index 00000000000000..eae7aa4c19d73a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_sanskasr_sa.md @@ -0,0 +1,84 @@ +--- +layout: model +title: Sanskrit whisper_small_sanskasr WhisperForCTC from bvkbharadwaj +author: John Snow Labs +name: whisper_small_sanskasr +date: 2024-09-10 +tags: [sa, open_source, onnx, asr, whisper] +task: Automatic Speech Recognition +language: sa +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: WhisperForCTC +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_small_sanskasr` is a Sanskrit model originally trained by bvkbharadwaj. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_small_sanskasr_sa_5.5.0_3.0_1725942749482.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_small_sanskasr_sa_5.5.0_3.0_1725942749482.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+audioAssembler = AudioAssembler() \
+    .setInputCol("audio_content") \
+    .setOutputCol("audio_assembler")
+
+speechToText = WhisperForCTC.pretrained("whisper_small_sanskasr","sa") \
+    .setInputCols(["audio_assembler"]) \
+    .setOutputCol("text")
+
+pipeline = Pipeline().setStages([audioAssembler, speechToText])
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val audioAssembler = new AudioAssembler()
+    .setInputCol("audio_content")
+    .setOutputCol("audio_assembler")
+
+val speechToText = WhisperForCTC.pretrained("whisper_small_sanskasr", "sa")
+    .setInputCols(Array("audio_assembler"))
+    .setOutputCol("text")
+
+val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_small_sanskasr| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[audio_assembler]| +|Output Labels:|[text]| +|Language:|sa| +|Size:|1.7 GB| + +## References + +https://huggingface.co/bvkbharadwaj/whisper-small-sanskasr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_small_wolof_cifope_en.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_wolof_cifope_en.md new file mode 100644 index 00000000000000..4c16b57b6be1f9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_wolof_cifope_en.md @@ -0,0 +1,84 @@ +--- +layout: model +title: English whisper_small_wolof_cifope WhisperForCTC from cifope +author: John Snow Labs +name: whisper_small_wolof_cifope +date: 2024-09-10 +tags: [en, open_source, onnx, asr, whisper] +task: Automatic Speech Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: WhisperForCTC +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_small_wolof_cifope` is a English model originally trained by cifope. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_small_wolof_cifope_en_5.5.0_3.0_1725944361328.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_small_wolof_cifope_en_5.5.0_3.0_1725944361328.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+audioAssembler = AudioAssembler() \
+    .setInputCol("audio_content") \
+    .setOutputCol("audio_assembler")
+
+speechToText = WhisperForCTC.pretrained("whisper_small_wolof_cifope","en") \
+    .setInputCols(["audio_assembler"]) \
+    .setOutputCol("text")
+
+pipeline = Pipeline().setStages([audioAssembler, speechToText])
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val audioAssembler = new AudioAssembler()
+    .setInputCol("audio_content")
+    .setOutputCol("audio_assembler")
+
+val speechToText = WhisperForCTC.pretrained("whisper_small_wolof_cifope", "en")
+    .setInputCols(Array("audio_assembler"))
+    .setOutputCol("text")
+
+val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_small_wolof_cifope| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[audio_assembler]| +|Output Labels:|[text]| +|Language:|en| +|Size:|1.7 GB| + +## References + +https://huggingface.co/cifope/whisper-small-wolof \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_small_wolof_cifope_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_wolof_cifope_pipeline_en.md new file mode 100644 index 00000000000000..3be20c8188fe82 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_small_wolof_cifope_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English whisper_small_wolof_cifope_pipeline pipeline WhisperForCTC from cifope +author: John Snow Labs +name: whisper_small_wolof_cifope_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_small_wolof_cifope_pipeline` is a English model originally trained by cifope. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_small_wolof_cifope_pipeline_en_5.5.0_3.0_1725944444118.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_small_wolof_cifope_pipeline_en_5.5.0_3.0_1725944444118.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("whisper_small_wolof_cifope_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("whisper_small_wolof_cifope_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_small_wolof_cifope_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.7 GB| + +## References + +https://huggingface.co/cifope/whisper-small-wolof + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_speech_small_en.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_speech_small_en.md new file mode 100644 index 00000000000000..a56131ce23b964 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_speech_small_en.md @@ -0,0 +1,84 @@ +--- +layout: model +title: English whisper_speech_small WhisperForCTC from fatipd +author: John Snow Labs +name: whisper_speech_small +date: 2024-09-10 +tags: [en, open_source, onnx, asr, whisper] +task: Automatic Speech Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: WhisperForCTC +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_speech_small` is a English model originally trained by fatipd. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_speech_small_en_5.5.0_3.0_1725941332243.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_speech_small_en_5.5.0_3.0_1725941332243.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+audioAssembler = AudioAssembler() \
+    .setInputCol("audio_content") \
+    .setOutputCol("audio_assembler")
+
+speechToText = WhisperForCTC.pretrained("whisper_speech_small","en") \
+    .setInputCols(["audio_assembler"]) \
+    .setOutputCol("text")
+
+pipeline = Pipeline().setStages([audioAssembler, speechToText])
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val audioAssembler = new AudioAssembler()
+    .setInputCol("audio_content")
+    .setOutputCol("audio_assembler")
+
+val speechToText = WhisperForCTC.pretrained("whisper_speech_small", "en")
+    .setInputCols(Array("audio_assembler"))
+    .setOutputCol("text")
+
+val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_speech_small| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[audio_assembler]| +|Output Labels:|[text]| +|Language:|en| +|Size:|1.7 GB| + +## References + +https://huggingface.co/fatipd/whisper-speech-small \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_speech_small_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_speech_small_pipeline_en.md new file mode 100644 index 00000000000000..9c87f521fcb653 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_speech_small_pipeline_en.md @@ -0,0 +1,69 @@ +--- +layout: model +title: English whisper_speech_small_pipeline pipeline WhisperForCTC from fatipd +author: John Snow Labs +name: whisper_speech_small_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_speech_small_pipeline` is a English model originally trained by fatipd. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_speech_small_pipeline_en_5.5.0_3.0_1725941413109.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_speech_small_pipeline_en_5.5.0_3.0_1725941413109.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("whisper_speech_small_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("whisper_speech_small_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_speech_small_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|1.7 GB| + +## References + +https://huggingface.co/fatipd/whisper-speech-small + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_tiny_malayalam_ml.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_tiny_malayalam_ml.md new file mode 100644 index 00000000000000..cf7b90379aa9ad --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_tiny_malayalam_ml.md @@ -0,0 +1,84 @@ +--- +layout: model +title: Malayalam whisper_tiny_malayalam WhisperForCTC from parambharat +author: John Snow Labs +name: whisper_tiny_malayalam +date: 2024-09-10 +tags: [ml, open_source, onnx, asr, whisper] +task: Automatic Speech Recognition +language: ml +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: WhisperForCTC +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_tiny_malayalam` is a Malayalam model originally trained by parambharat. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_tiny_malayalam_ml_5.5.0_3.0_1725945455016.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_tiny_malayalam_ml_5.5.0_3.0_1725945455016.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+audioAssembler = AudioAssembler() \
+    .setInputCol("audio_content") \
+    .setOutputCol("audio_assembler")
+
+speechToText = WhisperForCTC.pretrained("whisper_tiny_malayalam","ml") \
+    .setInputCols(["audio_assembler"]) \
+    .setOutputCol("text")
+
+pipeline = Pipeline().setStages([audioAssembler, speechToText])
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val audioAssembler = new AudioAssembler()
+    .setInputCol("audio_content")
+    .setOutputCol("audio_assembler")
+
+val speechToText = WhisperForCTC.pretrained("whisper_tiny_malayalam", "ml")
+    .setInputCols(Array("audio_assembler"))
+    .setOutputCol("text")
+
+val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_tiny_malayalam| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[audio_assembler]| +|Output Labels:|[text]| +|Language:|ml| +|Size:|391.0 MB| + +## References + +https://huggingface.co/parambharat/whisper-tiny-ml \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-whisper_tiny_malayalam_pipeline_ml.md b/docs/_posts/ahmedlone127/2024-09-10-whisper_tiny_malayalam_pipeline_ml.md new file mode 100644 index 00000000000000..847f94c7152063 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-whisper_tiny_malayalam_pipeline_ml.md @@ -0,0 +1,69 @@ +--- +layout: model +title: Malayalam whisper_tiny_malayalam_pipeline pipeline WhisperForCTC from parambharat +author: John Snow Labs +name: whisper_tiny_malayalam_pipeline +date: 2024-09-10 +tags: [ml, open_source, pipeline, onnx] +task: Automatic Speech Recognition +language: ml +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained WhisperForCTC, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`whisper_tiny_malayalam_pipeline` is a Malayalam model originally trained by parambharat. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/whisper_tiny_malayalam_pipeline_ml_5.5.0_3.0_1725945474292.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/whisper_tiny_malayalam_pipeline_ml_5.5.0_3.0_1725945474292.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("whisper_tiny_malayalam_pipeline", lang = "ml") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("whisper_tiny_malayalam_pipeline", lang = "ml") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|whisper_tiny_malayalam_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|ml| +|Size:|391.0 MB| + +## References + +https://huggingface.co/parambharat/whisper-tiny-ml + +## Included Models + +- AudioAssembler +- WhisperForCTC \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_germeval_14_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_germeval_14_en.md new file mode 100644 index 00000000000000..39828ca365d1d1 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_germeval_14_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_germeval_14 XlmRoBertaForTokenClassification from stefanieZ +author: John Snow Labs +name: xlm_roberta_base_finetuned_germeval_14 +date: 2024-09-10 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_germeval_14` is a English model originally trained by stefanieZ. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_germeval_14_en_5.5.0_3.0_1725928527266.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_germeval_14_en_5.5.0_3.0_1725928527266.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_germeval_14","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_germeval_14", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
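+
+After the pipeline above has run, the predicted tags can be read back from the `ner` output column configured in the snippet. A short illustrative follow-up (plain Spark SQL, no additional assumptions beyond the column names set above):
+
+```python
+# Inspect tokens and their predicted NER tags from the columns configured above
+pipelineDF.selectExpr("token.result as tokens", "ner.result as ner_labels").show(truncate=False)
+```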
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_germeval_14| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|861.3 MB| + +## References + +https://huggingface.co/stefanieZ/xlm-roberta-base-finetuned-germeval-14 \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_germeval_14_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_germeval_14_pipeline_en.md new file mode 100644 index 00000000000000..42549724c16530 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_germeval_14_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_germeval_14_pipeline pipeline XlmRoBertaForTokenClassification from stefanieZ +author: John Snow Labs +name: xlm_roberta_base_finetuned_germeval_14_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_germeval_14_pipeline` is a English model originally trained by stefanieZ. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_germeval_14_pipeline_en_5.5.0_3.0_1725928595033.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_germeval_14_pipeline_en_5.5.0_3.0_1725928595033.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_germeval_14_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_germeval_14_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
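+
+For the pretrained pipeline, `df` is assumed to be any DataFrame with a `text` column, since the bundled `DocumentAssembler` reads from it. A minimal, illustrative sketch (the example sentence, the input column name, and the `ner` output column name are assumptions based on the corresponding model card):
+
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_germeval_14_pipeline", lang = "en")
+
+df = spark.createDataFrame([["John Snow Labs is based in Delaware."]]).toDF("text")
+annotations = pipeline.transform(df)
+
+# Flatten the predicted tag sequence for a quick look
+annotations.selectExpr("explode(ner.result) as ner_label").show(truncate=False)
+```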
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_germeval_14_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|861.3 MB| + +## References + +https://huggingface.co/stefanieZ/xlm-roberta-base-finetuned-germeval-14 + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_bengali_thepinakiroy_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_bengali_thepinakiroy_en.md new file mode 100644 index 00000000000000..e95ccc4b5c5a70 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_bengali_thepinakiroy_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_bengali_thepinakiroy XlmRoBertaForTokenClassification from thepinakiroy +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_bengali_thepinakiroy +date: 2024-09-10 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_bengali_thepinakiroy` is a English model originally trained by thepinakiroy. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_bengali_thepinakiroy_en_5.5.0_3.0_1725928635168.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_bengali_thepinakiroy_en_5.5.0_3.0_1725928635168.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_bengali_thepinakiroy","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_bengali_thepinakiroy", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_bengali_thepinakiroy| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|833.2 MB| + +## References + +https://huggingface.co/thepinakiroy/xlm-roberta-base-finetuned-panx-bn \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_english_udon3_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_english_udon3_pipeline_en.md new file mode 100644 index 00000000000000..4fc088b9174629 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_english_udon3_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_english_udon3_pipeline pipeline XlmRoBertaForTokenClassification from udon3 +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_english_udon3_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_english_udon3_pipeline` is a English model originally trained by udon3. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_english_udon3_pipeline_en_5.5.0_3.0_1725927956239.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_english_udon3_pipeline_en_5.5.0_3.0_1725927956239.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_english_udon3_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_english_udon3_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_english_udon3_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|826.4 MB| + +## References + +https://huggingface.co/udon3/xlm-roberta-base-finetuned-panx-en + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_french_mj03_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_french_mj03_en.md new file mode 100644 index 00000000000000..b0d44054745d4d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_french_mj03_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_mj03 XlmRoBertaForTokenClassification from MJ03 +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_mj03 +date: 2024-09-10 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_mj03` is a English model originally trained by MJ03. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_mj03_en_5.5.0_3.0_1725929528364.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_mj03_en_5.5.0_3.0_1725929528364.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_mj03","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_mj03", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_mj03| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|840.9 MB| + +## References + +https://huggingface.co/MJ03/xlm-roberta-base-finetuned-panx-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_french_omersubasi_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_french_omersubasi_en.md new file mode 100644 index 00000000000000..3d417c2f2f051a --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_french_omersubasi_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_omersubasi XlmRoBertaForTokenClassification from omersubasi +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_omersubasi +date: 2024-09-10 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_omersubasi` is a English model originally trained by omersubasi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_omersubasi_en_5.5.0_3.0_1725928238823.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_omersubasi_en_5.5.0_3.0_1725928238823.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_omersubasi","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_omersubasi", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_omersubasi| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|831.2 MB| + +## References + +https://huggingface.co/omersubasi/xlm-roberta-base-finetuned-panx-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_french_omersubasi_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_french_omersubasi_pipeline_en.md new file mode 100644 index 00000000000000..d97bd3466529c7 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_french_omersubasi_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_omersubasi_pipeline pipeline XlmRoBertaForTokenClassification from omersubasi +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_omersubasi_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_omersubasi_pipeline` is a English model originally trained by omersubasi. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_omersubasi_pipeline_en_5.5.0_3.0_1725928321508.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_omersubasi_pipeline_en_5.5.0_3.0_1725928321508.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_french_omersubasi_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_french_omersubasi_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_omersubasi_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|831.2 MB| + +## References + +https://huggingface.co/omersubasi/xlm-roberta-base-finetuned-panx-fr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_french_ryatora_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_french_ryatora_en.md new file mode 100644 index 00000000000000..485e8648447112 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_french_ryatora_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_french_ryatora XlmRoBertaForTokenClassification from ryatora +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_french_ryatora +date: 2024-09-10 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_french_ryatora` is a English model originally trained by ryatora. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_ryatora_en_5.5.0_3.0_1725929462169.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_french_ryatora_en_5.5.0_3.0_1725929462169.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_ryatora","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_french_ryatora", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_french_ryatora| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|831.2 MB| + +## References + +https://huggingface.co/ryatora/xlm-roberta-base-finetuned-panx-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_dkoh12_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_dkoh12_en.md new file mode 100644 index 00000000000000..39b6c84638d162 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_dkoh12_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_dkoh12 XlmRoBertaForTokenClassification from dkoh12 +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_dkoh12 +date: 2024-09-10 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_dkoh12` is a English model originally trained by dkoh12. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_dkoh12_en_5.5.0_3.0_1725928901844.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_dkoh12_en_5.5.0_3.0_1725928901844.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_dkoh12","en") \
+    .setInputCols(["document","token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_dkoh12", "en")
+    .setInputCols(Array("document","token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_dkoh12| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|853.8 MB| + +## References + +https://huggingface.co/dkoh12/xlm-roberta-base-finetuned-panx-de \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_dkoh12_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_dkoh12_pipeline_en.md new file mode 100644 index 00000000000000..d05ce04cac2794 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_dkoh12_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_dkoh12_pipeline pipeline XlmRoBertaForTokenClassification from dkoh12 +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_dkoh12_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_dkoh12_pipeline` is a English model originally trained by dkoh12. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_dkoh12_pipeline_en_5.5.0_3.0_1725928969313.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_dkoh12_pipeline_en_5.5.0_3.0_1725928969313.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python + +pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_dkoh12_pipeline", lang = "en") +annotations = pipeline.transform(df) + +``` +```scala + +val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_dkoh12_pipeline", lang = "en") +val annotations = pipeline.transform(df) + +``` +
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_dkoh12_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|853.8 MB| + +## References + +https://huggingface.co/dkoh12/xlm-roberta-base-finetuned-panx-de + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_french_jaemin12_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_french_jaemin12_pipeline_en.md new file mode 100644 index 00000000000000..d97dbacc361fee --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_french_jaemin12_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_french_jaemin12_pipeline pipeline XlmRoBertaForTokenClassification from jaemin12 +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_french_jaemin12_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_french_jaemin12_pipeline` is a English model originally trained by jaemin12. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_jaemin12_pipeline_en_5.5.0_3.0_1725927952952.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_jaemin12_pipeline_en_5.5.0_3.0_1725927952952.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+# Download the pretrained pipeline and run it over a DataFrame with a "text" column
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_french_jaemin12_pipeline", lang = "en")
+annotations = pipeline.transform(df)
+
+```
+```scala
+import spark.implicits._
+import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline
+
+// Download the pretrained pipeline and run it over a DataFrame with a "text" column
+val df = Seq("I love spark-nlp").toDF("text")
+val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_french_jaemin12_pipeline", lang = "en")
+val annotations = pipeline.transform(df)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_french_jaemin12_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|858.2 MB| + +## References + +https://huggingface.co/jaemin12/xlm-roberta-base-finetuned-panx-de-fr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_french_paww_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_french_paww_en.md new file mode 100644 index 00000000000000..5396fbaa72f87f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_french_paww_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_french_paww XlmRoBertaForTokenClassification from paww +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_french_paww +date: 2024-09-10 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_french_paww` is a English model originally trained by paww. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_paww_en_5.5.0_3.0_1725928716587.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_paww_en_5.5.0_3.0_1725928716587.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import Tokenizer, XlmRoBertaForTokenClassification
+from pyspark.ml import Pipeline
+
+# Assemble raw text into documents, split them into tokens, then tag each token
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_french_paww", "en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+import spark.implicits._
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+
+// Assemble raw text into documents, split them into tokens, then tag each token
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_french_paww", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
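+
+For low-latency inference on a handful of texts, the fitted `pipelineModel` above can be wrapped in a `LightPipeline` instead of calling `transform` on a DataFrame; a minimal sketch:
+
+```python
+from sparknlp.base import LightPipeline
+
+# LightPipeline runs the same stages in-memory, without Spark job overhead
+light_model = LightPipeline(pipelineModel)
+print(light_model.annotate("I love spark-nlp"))
+```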
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_french_paww| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|858.2 MB| + +## References + +https://huggingface.co/paww/xlm-roberta-base-finetuned-panx-de-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_french_paww_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_french_paww_pipeline_en.md new file mode 100644 index 00000000000000..451dc9ce765c5f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_french_paww_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_french_paww_pipeline pipeline XlmRoBertaForTokenClassification from paww +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_french_paww_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_french_paww_pipeline` is a English model originally trained by paww. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_paww_pipeline_en_5.5.0_3.0_1725928781801.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_paww_pipeline_en_5.5.0_3.0_1725928781801.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+# Download the pretrained pipeline and run it over a DataFrame with a "text" column
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_french_paww_pipeline", lang = "en")
+annotations = pipeline.transform(df)
+
+```
+```scala
+import spark.implicits._
+import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline
+
+// Download the pretrained pipeline and run it over a DataFrame with a "text" column
+val df = Seq("I love spark-nlp").toDF("text")
+val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_french_paww_pipeline", lang = "en")
+val annotations = pipeline.transform(df)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_french_paww_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|858.2 MB| + +## References + +https://huggingface.co/paww/xlm-roberta-base-finetuned-panx-de-fr + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_french_shinta0615_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_french_shinta0615_en.md new file mode 100644 index 00000000000000..5573b645cbb899 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_french_shinta0615_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_french_shinta0615 XlmRoBertaForTokenClassification from shinta0615 +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_french_shinta0615 +date: 2024-09-10 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_french_shinta0615` is a English model originally trained by shinta0615. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_shinta0615_en_5.5.0_3.0_1725927890187.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_french_shinta0615_en_5.5.0_3.0_1725927890187.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import Tokenizer, XlmRoBertaForTokenClassification
+from pyspark.ml import Pipeline
+
+# Assemble raw text into documents, split them into tokens, then tag each token
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_french_shinta0615", "en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+import spark.implicits._
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+
+// Assemble raw text into documents, split them into tokens, then tag each token
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_german_french_shinta0615", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
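+
+The fitted `pipelineModel` is a regular Spark ML `PipelineModel`, so it can be persisted and reloaded without downloading the weights again; the path below is only an example:
+
+```python
+from pyspark.ml import PipelineModel
+
+# Save the fitted pipeline to disk and load it back for later use
+pipelineModel.write().overwrite().save("/tmp/xlm_roberta_panx_ner_pipeline")
+restored = PipelineModel.load("/tmp/xlm_roberta_panx_ner_pipeline")
+restored.transform(data).select("ner.result").show(truncate=False)
+```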
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_french_shinta0615| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|858.2 MB| + +## References + +https://huggingface.co/shinta0615/xlm-roberta-base-finetuned-panx-de-fr \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_kiri1701_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_kiri1701_pipeline_en.md new file mode 100644 index 00000000000000..b142ef51d4405d --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_german_kiri1701_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_german_kiri1701_pipeline pipeline XlmRoBertaForTokenClassification from kiri1701 +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_german_kiri1701_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_german_kiri1701_pipeline` is a English model originally trained by kiri1701. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_kiri1701_pipeline_en_5.5.0_3.0_1725929041144.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_german_kiri1701_pipeline_en_5.5.0_3.0_1725929041144.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+# Download the pretrained pipeline and run it over a DataFrame with a "text" column
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_kiri1701_pipeline", lang = "en")
+annotations = pipeline.transform(df)
+
+```
+```scala
+import spark.implicits._
+import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline
+
+// Download the pretrained pipeline and run it over a DataFrame with a "text" column
+val df = Seq("I love spark-nlp").toDF("text")
+val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_german_kiri1701_pipeline", lang = "en")
+val annotations = pipeline.transform(df)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_german_kiri1701_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|853.8 MB| + +## References + +https://huggingface.co/kiri1701/xlm-roberta-base-finetuned-panx-de + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_italian_sorabe_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_italian_sorabe_en.md new file mode 100644 index 00000000000000..43bd766b5ff28f --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_italian_sorabe_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_italian_sorabe XlmRoBertaForTokenClassification from SORABE +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_italian_sorabe +date: 2024-09-10 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_italian_sorabe` is a English model originally trained by SORABE. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_italian_sorabe_en_5.5.0_3.0_1725929634722.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_italian_sorabe_en_5.5.0_3.0_1725929634722.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import Tokenizer, XlmRoBertaForTokenClassification
+from pyspark.ml import Pipeline
+
+# Assemble raw text into documents, split them into tokens, then tag each token
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_italian_sorabe", "en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+import spark.implicits._
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+
+// Assemble raw text into documents, split them into tokens, then tag each token
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_finetuned_panx_italian_sorabe", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
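+
+To inspect per-token details alongside the tags, `LightPipeline.fullAnnotate` keeps the full `Annotation` objects; the output keys follow the output columns defined above, and the contents of `metadata` (such as confidence scores) are an assumption rather than a documented guarantee:
+
+```python
+from sparknlp.base import LightPipeline
+
+# fullAnnotate returns Annotation objects with the result text and a metadata dict
+annotated = LightPipeline(pipelineModel).fullAnnotate("Mi piace spark-nlp")[0]
+for token_ann, ner_ann in zip(annotated["token"], annotated["ner"]):
+    print(token_ann.result, ner_ann.result, ner_ann.metadata)
+```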
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_italian_sorabe| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|816.7 MB| + +## References + +https://huggingface.co/SORABE/xlm-roberta-base-finetuned-panx-it \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_korean_japanese_pipeline_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_korean_japanese_pipeline_en.md new file mode 100644 index 00000000000000..6235dabdef1bf9 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_finetuned_panx_korean_japanese_pipeline_en.md @@ -0,0 +1,70 @@ +--- +layout: model +title: English xlm_roberta_base_finetuned_panx_korean_japanese_pipeline pipeline XlmRoBertaForTokenClassification from Noveled +author: John Snow Labs +name: xlm_roberta_base_finetuned_panx_korean_japanese_pipeline +date: 2024-09-10 +tags: [en, open_source, pipeline, onnx] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +annotator: PipelineModel +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_finetuned_panx_korean_japanese_pipeline` is a English model originally trained by Noveled. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_korean_japanese_pipeline_en_5.5.0_3.0_1725928510467.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_finetuned_panx_korean_japanese_pipeline_en_5.5.0_3.0_1725928510467.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.pretrained import PretrainedPipeline
+
+# Download the pretrained pipeline and run it over a DataFrame with a "text" column
+df = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipeline = PretrainedPipeline("xlm_roberta_base_finetuned_panx_korean_japanese_pipeline", lang = "en")
+annotations = pipeline.transform(df)
+
+```
+```scala
+import spark.implicits._
+import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline
+
+// Download the pretrained pipeline and run it over a DataFrame with a "text" column
+val df = Seq("I love spark-nlp").toDF("text")
+val pipeline = new PretrainedPipeline("xlm_roberta_base_finetuned_panx_korean_japanese_pipeline", lang = "en")
+val annotations = pipeline.transform(df)
+
+```
+</div>
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_finetuned_panx_korean_japanese_pipeline| +|Type:|pipeline| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Language:|en| +|Size:|832.8 MB| + +## References + +https://huggingface.co/Noveled/xlm-roberta-base-finetuned-panx-ko-ja + +## Included Models + +- DocumentAssembler +- TokenizerModel +- XlmRoBertaForTokenClassification \ No newline at end of file diff --git a/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_ft_udpos213_top3lang_southern_sotho_en.md b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_ft_udpos213_top3lang_southern_sotho_en.md new file mode 100644 index 00000000000000..35a1809ed80aa2 --- /dev/null +++ b/docs/_posts/ahmedlone127/2024-09-10-xlm_roberta_base_ft_udpos213_top3lang_southern_sotho_en.md @@ -0,0 +1,94 @@ +--- +layout: model +title: English xlm_roberta_base_ft_udpos213_top3lang_southern_sotho XlmRoBertaForTokenClassification from iceman2434 +author: John Snow Labs +name: xlm_roberta_base_ft_udpos213_top3lang_southern_sotho +date: 2024-09-10 +tags: [en, open_source, onnx, token_classification, xlm_roberta, ner] +task: Named Entity Recognition +language: en +edition: Spark NLP 5.5.0 +spark_version: 3.0 +supported: true +engine: onnx +annotator: XlmRoBertaForTokenClassification +article_header: + type: cover +use_language_switcher: "Python-Scala-Java" +--- + +## Description + +Pretrained XlmRoBertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP.`xlm_roberta_base_ft_udpos213_top3lang_southern_sotho` is a English model originally trained by iceman2434. + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_ft_udpos213_top3lang_southern_sotho_en_5.5.0_3.0_1725928858844.zip){:.button.button-orange.button-orange-trans.arr.button-icon} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/xlm_roberta_base_ft_udpos213_top3lang_southern_sotho_en_5.5.0_3.0_1725928858844.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + + + +
+{% include programmingLanguageSelectScalaPythonNLU.html %}
+```python
+from sparknlp.base import DocumentAssembler
+from sparknlp.annotator import Tokenizer, XlmRoBertaForTokenClassification
+from pyspark.ml import Pipeline
+
+# Assemble raw text into documents, split them into tokens, then tag each token
+documentAssembler = DocumentAssembler() \
+    .setInputCol('text') \
+    .setOutputCol('document')
+
+tokenizer = Tokenizer() \
+    .setInputCols(['document']) \
+    .setOutputCol('token')
+
+tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_ft_udpos213_top3lang_southern_sotho", "en") \
+    .setInputCols(["document", "token"]) \
+    .setOutputCol("ner")
+
+pipeline = Pipeline().setStages([documentAssembler, tokenizer, tokenClassifier])
+data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
+pipelineModel = pipeline.fit(data)
+pipelineDF = pipelineModel.transform(data)
+
+```
+```scala
+import spark.implicits._
+import com.johnsnowlabs.nlp.base._
+import com.johnsnowlabs.nlp.annotator._
+import org.apache.spark.ml.Pipeline
+
+// Assemble raw text into documents, split them into tokens, then tag each token
+val documentAssembler = new DocumentAssembler()
+    .setInputCol("text")
+    .setOutputCol("document")
+
+val tokenizer = new Tokenizer()
+    .setInputCols("document")
+    .setOutputCol("token")
+
+val tokenClassifier = XlmRoBertaForTokenClassification.pretrained("xlm_roberta_base_ft_udpos213_top3lang_southern_sotho", "en")
+    .setInputCols(Array("document", "token"))
+    .setOutputCol("ner")
+
+val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))
+val data = Seq("I love spark-nlp").toDS.toDF("text")
+val pipelineModel = pipeline.fit(data)
+val pipelineDF = pipelineModel.transform(data)
+
+```
+</div>
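+
+As a quick sanity check of the output distribution (not an evaluation), the predicted tags in `pipelineDF` can be counted with plain Spark SQL functions:
+
+```python
+from pyspark.sql import functions as F
+
+# Explode the per-token tags and count how often each label is predicted
+pipelineDF.select(F.explode("ner.result").alias("tag")).groupBy("tag").count().show()
+```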
+ +{:.model-param} +## Model Information + +{:.table-model} +|---|---| +|Model Name:|xlm_roberta_base_ft_udpos213_top3lang_southern_sotho| +|Compatibility:|Spark NLP 5.5.0+| +|License:|Open Source| +|Edition:|Official| +|Input Labels:|[document, token]| +|Output Labels:|[ner]| +|Language:|en| +|Size:|790.5 MB| + +## References + +https://huggingface.co/iceman2434/xlm-roberta-base_ft_udpos213-top3lang-st \ No newline at end of file