2023-06-26-distilbert_embeddings_finetuned_sarcasm_classification_en (#13867)

* Add model 2023-06-26-distilbert_embeddings_finetuned_sarcasm_classification_en
* Add model 2023-06-26-distilbert_embeddings_distilbert_base_indonesian_id
* Add model 2023-06-26-distilbert_embeddings_BERTino_it
* Add model 2023-06-26-distilbert_embeddings_distilbert_base_uncased_sparse_85_unstructured_pruneofa_en
* Add model 2023-06-26-distilbert_embeddings_malaysian_distilbert_small_ms
* Add model 2023-06-26-distilbert_embeddings_distilbert_fa_zwnj_base_fa
* Add model 2023-06-26-distilbert_embeddings_javanese_distilbert_small_jv
* Add model 2023-06-26-distilbert_embeddings_javanese_distilbert_small_imdb_jv
* Add model 2023-06-26-distilbert_embeddings_indic_transformers_hi_distilbert_hi
* Add model 2023-06-26-distilbert_embeddings_marathi_distilbert_mr
* Add model 2023-06-26-distilbert_embeddings_indic_transformers_bn_distilbert_bn
* Add model 2023-06-26-distilbert_embeddings_distilbert_base_uncased_sparse_90_unstructured_pruneofa_en
* Add model 2023-06-26-deberta_embeddings_xsmall_dapt_scientific_papers_pubmed_en
* Add model 2023-06-26-deberta_embeddings_spm_vie_vie
* Add model 2023-06-26-deberta_embeddings_vie_small_vie
* Add model 2023-06-26-deberta_embeddings_tapt_nbme_v3_base_en
* Add model 2023-06-26-deberta_embeddings_erlangshen_v2_chinese_sentencepiece_zh
* Add model 2023-06-26-deberta_v3_xsmall_en
* Add model 2023-06-26-deberta_embeddings_mlm_test_en
* Add model 2023-06-26-deberta_v3_small_en
* Add model 2023-06-26-roberta_base_swiss_legal_gsw

Co-authored-by: ahmedlone127 <ahmedlone127@gmail.com>
1 parent d054074 · commit 43ab794
Showing 21 changed files with 2,910 additions and 0 deletions.
140 changes: 140 additions & 0 deletions
...lone127/2023-06-26-deberta_embeddings_erlangshen_v2_chinese_sentencepiece_zh.md
---
layout: model
title: Chinese Deberta Embeddings Cased model (from IDEA-CCNL)
author: John Snow Labs
name: deberta_embeddings_erlangshen_v2_chinese_sentencepiece
date: 2023-06-26
tags: [open_source, deberta, deberta_embeddings, debertav2formaskedlm, zh, onnx]
task: Embeddings
language: zh
edition: Spark NLP 5.0.0
spark_version: 3.0
supported: true
engine: onnx
annotator: DeBertaEmbeddings
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

Pretrained DebertaV2ForMaskedLM model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece` is a Chinese model originally trained by `IDEA-CCNL`.

## Predicted Entities

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deberta_embeddings_erlangshen_v2_chinese_sentencepiece_zh_5.0.0_3.0_1687781761029.zip){:.button.button-orange.button-orange-trans.arr.button-icon}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deberta_embeddings_erlangshen_v2_chinese_sentencepiece_zh_5.0.0_3.0_1687781761029.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use

<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}

```python
documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols("document") \
    .setOutputCol("token")

embeddings = DeBertaEmbeddings.pretrained("deberta_embeddings_erlangshen_v2_chinese_sentencepiece","zh") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("embeddings") \
    .setCaseSensitive(True)

pipeline = Pipeline(stages=[documentAssembler, tokenizer, embeddings])

data = spark.createDataFrame([["I love Spark-NLP"]]).toDF("text")

result = pipeline.fit(data).transform(data)
```
```scala
val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val embeddings = DeBertaEmbeddings.pretrained("deberta_embeddings_erlangshen_v2_chinese_sentencepiece","zh")
  .setInputCols(Array("document", "token"))
  .setOutputCol("embeddings")
  .setCaseSensitive(true)

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings))

val data = Seq("I love Spark-NLP").toDS.toDF("text")

val result = pipeline.fit(data).transform(data)
```
</div>
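The pipeline above attaches one embedding vector per token to each row. Downstream use usually reduces to plain vector math on those arrays; as a minimal, framework-free sketch, here is cosine similarity between two token vectors (the 4-dimensional vectors below are invented for illustration — the actual model emits much wider vectors):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors standing in for real token embeddings.
vec_a = [0.1, 0.3, -0.2, 0.7]
vec_b = [0.2, 0.25, -0.1, 0.6]
similarity = cosine(vec_a, vec_b)
```

The same arithmetic applies unchanged to the vectors Spark NLP returns in the `embeddings` output column once they are collected to the driver.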
{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|deberta_embeddings_erlangshen_v2_chinese_sentencepiece|
|Compatibility:|Spark NLP 5.0.0+|
|License:|Open Source|
|Edition:|Official|
|Input Labels:|[sentence, token]|
|Output Labels:|[embeddings]|
|Language:|zh|
|Size:|443.8 MB|
|Case sensitive:|false|
140 changes: 140 additions & 0 deletions
docs/_posts/ahmedlone127/2023-06-26-deberta_embeddings_mlm_test_en.md
---
layout: model
title: English Deberta Embeddings model (from domenicrosati)
author: John Snow Labs
name: deberta_embeddings_mlm_test
date: 2023-06-26
tags: [deberta, open_source, deberta_embeddings, debertav2formaskedlm, en, onnx]
task: Embeddings
language: en
edition: Spark NLP 5.0.0
spark_version: 3.0
supported: true
engine: onnx
annotator: DeBertaEmbeddings
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

Pretrained DebertaEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `deberta-mlm-test` is an English model originally trained by `domenicrosati`.

## Predicted Entities

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/deberta_embeddings_mlm_test_en_5.0.0_3.0_1687782209221.zip){:.button.button-orange.button-orange-trans.arr.button-icon}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/deberta_embeddings_mlm_test_en_5.0.0_3.0_1687782209221.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use

<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}

```python
documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols("document") \
    .setOutputCol("token")

embeddings = DeBertaEmbeddings.pretrained("deberta_embeddings_mlm_test","en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("embeddings") \
    .setCaseSensitive(True)

pipeline = Pipeline(stages=[documentAssembler, tokenizer, embeddings])

data = spark.createDataFrame([["I love Spark NLP"]]).toDF("text")

result = pipeline.fit(data).transform(data)
```
```scala
val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val embeddings = DeBertaEmbeddings.pretrained("deberta_embeddings_mlm_test","en")
  .setInputCols(Array("document", "token"))
  .setOutputCol("embeddings")
  .setCaseSensitive(true)

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings))

val data = Seq("I love Spark NLP").toDS.toDF("text")

val result = pipeline.fit(data).transform(data)
```
</div>
{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|deberta_embeddings_mlm_test|
|Compatibility:|Spark NLP 5.0.0+|
|License:|Open Source|
|Edition:|Official|
|Input Labels:|[sentence, token]|
|Output Labels:|[embeddings]|
|Language:|en|
|Size:|265.4 MB|
|Case sensitive:|false|
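Both cards above produce token-level embeddings; a common next step before classification is mean pooling, i.e. averaging the per-token vectors into one sentence vector. A minimal sketch with toy 3-dimensional vectors (the numbers and dimensionality are invented for illustration):

```python
def mean_pool(token_vectors):
    """Average a list of equal-length token embeddings into one sentence vector."""
    if not token_vectors:
        raise ValueError("need at least one token vector")
    dim = len(token_vectors[0])
    n = len(token_vectors)
    return [sum(vec[i] for vec in token_vectors) / n for i in range(dim)]

# Three toy token embeddings for one sentence.
tokens = [[1.0, 2.0, 0.0],
          [3.0, 0.0, 0.0],
          [2.0, 1.0, 3.0]]
sentence_vector = mean_pool(tokens)  # -> [2.0, 1.0, 1.0]
```

The same pooling can be applied row by row to the arrays in the `embeddings` column of `result` after collection.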