2023-02-20-bert_embeddings_pretrain_ko (#13548)
* Add model 2023-02-20-bert_embeddings_pretrain_ko

* Update 2023-02-20-bert_embeddings_pretrain_ko.md

* Update 2023-02-20-bert_embeddings_pretrain_ko.md

* Add model 2023-02-20-bert_embeddings_base_uncased_issues_128_en

* Update 2023-02-20-bert_embeddings_base_uncased_issues_128_en.md

* Add model 2023-02-21-chemical_uncased_finetuned_cust_c2_en

* Update 2023-02-21-chemical_uncased_finetuned_cust_c2_en.md

* Delete 2023-02-21-chemical_uncased_finetuned_cust_c2_en.md

* Add model 2023-02-21-bert_embeddings_chemical_uncased_finetuned_cust_c2_en

* Update 2023-02-21-bert_embeddings_chemical_uncased_finetuned_cust_c2_en.md

* Add model 2023-02-21-bert_embeddings_olm_base_uncased_oct_2022_en

* Update 2023-02-21-bert_embeddings_olm_base_uncased_oct_2022_en.md

* Add model 2023-02-21-bert_embeddings_chemical_uncased_finetuned_cust_c1_cust_en

* Update 2023-02-21-bert_embeddings_chemical_uncased_finetuned_cust_c1_cust_en.md

* Add model 2023-02-22-bert_embeddings_carlbert_webex_mlm_spatial_en

* Update 2023-02-22-bert_embeddings_carlbert_webex_mlm_spatial_en.md

* Add model 2023-02-22-bert_embeddings_distil_clinical_en

* Update 2023-02-22-bert_embeddings_distil_clinical_en.md

* Add model 2023-02-23-distilbert_embeddings_base_multilingual_cased_xx

* Update 2023-02-23-distilbert_embeddings_base_multilingual_cased_xx.md

* Add model 2023-02-23-deberta_embeddings_erlangshen_v2_chinese_sentencepiece_zh

* Update 2023-02-23-deberta_embeddings_erlangshen_v2_chinese_sentencepiece_zh.md

---------

Co-authored-by: Mary-Sci <meryemyildiz366@gmail.com>
Co-authored-by: Merve Ertas Uslu <67653613+Mary-Sci@users.noreply.github.com>
3 people authored Feb 24, 2023
1 parent 0347607 commit 0c314ef
Showing 9 changed files with 865 additions and 0 deletions.
---
layout: model
title: English Bert Embeddings Uncased model (from antoinev17)
author: John Snow Labs
name: bert_embeddings_base_uncased_issues_128
date: 2023-02-20
tags: [open_source, bert, bert_embeddings, bertformaskedlm, en, tensorflow]
task: Embeddings
language: en
edition: Spark NLP 4.3.0
spark_version: 3.0
supported: true
engine: tensorflow
annotator: BertEmbeddings
article_header:
type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

Pretrained BertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `bert-base-uncased-issues-128` is an English model originally trained by `antoinev17`.

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_embeddings_base_uncased_issues_128_en_4.3.0_3.0_1676927301180.zip){:.button.button-orange}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_embeddings_base_uncased_issues_128_en_4.3.0_3.0_1676927301180.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}

```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, BertEmbeddings
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

embeddings = BertEmbeddings.pretrained("bert_embeddings_base_uncased_issues_128", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("embeddings") \
    .setCaseSensitive(True)

pipeline = Pipeline(stages=[documentAssembler, tokenizer, embeddings])

data = spark.createDataFrame([["I love Spark NLP"]]).toDF("text")

result = pipeline.fit(data).transform(data)
```
```scala
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val embeddings = BertEmbeddings.pretrained("bert_embeddings_base_uncased_issues_128", "en")
  .setInputCols(Array("document", "token"))
  .setOutputCol("embeddings")
  .setCaseSensitive(true)

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings))

val data = Seq("I love Spark NLP").toDS.toDF("text")

val result = pipeline.fit(data).transform(data)
```
</div>
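
The `embeddings` output column holds one Spark NLP annotation per token. As a minimal sketch (assuming the Python pipeline above has been run with the default column names set there), the token vectors can be flattened for inspection like this:

```python
from pyspark.sql import functions as F

# Each row of `embeddings` is an array of annotations; explode to one row per token.
exploded = result.select(F.explode("embeddings").alias("emb"))

# `result` carries the token text, `embeddings` the vector (768 floats for a BERT-base model).
exploded.select(
    F.col("emb.result").alias("token"),
    F.col("emb.embeddings").alias("vector")
).show(truncate=80)
```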

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|bert_embeddings_base_uncased_issues_128|
|Compatibility:|Spark NLP 4.3.0+|
|License:|Open Source|
|Edition:|Official|
|Input Labels:|[sentence, token]|
|Output Labels:|[bert]|
|Language:|en|
|Size:|410.1 MB|
|Case sensitive:|true|

## References

https://huggingface.co/antoinev17/bert-base-uncased-issues-128
96 changes: 96 additions & 0 deletions docs/_posts/Mary-Sci/2023-02-20-bert_embeddings_pretrain_ko.md
---
layout: model
title: Korean Bert Embeddings Cased model (from onlydj96)
author: John Snow Labs
name: bert_embeddings_pretrain
date: 2023-02-20
tags: [open_source, bert, bert_embeddings, bertformaskedlm, ko, tensorflow]
task: Embeddings
language: ko
edition: Spark NLP 4.3.0
spark_version: 3.0
supported: true
engine: tensorflow
annotator: BertEmbeddings
article_header:
type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

Pretrained BertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `bert_pretrain` is a Korean model originally trained by `onlydj96`.

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_embeddings_pretrain_ko_4.3.0_3.0_1676925661631.zip){:.button.button-orange}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_embeddings_pretrain_ko_4.3.0_3.0_1676925661631.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}

```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, BertEmbeddings
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

embeddings = BertEmbeddings.pretrained("bert_embeddings_pretrain", "ko") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("embeddings") \
    .setCaseSensitive(True)

pipeline = Pipeline(stages=[documentAssembler, tokenizer, embeddings])

data = spark.createDataFrame([["I love Spark NLP"]]).toDF("text")

result = pipeline.fit(data).transform(data)
```
```scala
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val embeddings = BertEmbeddings.pretrained("bert_embeddings_pretrain", "ko")
  .setInputCols(Array("document", "token"))
  .setOutputCol("embeddings")
  .setCaseSensitive(true)

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings))

val data = Seq("I love Spark NLP").toDS.toDF("text")

val result = pipeline.fit(data).transform(data)
```
</div>
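
The sample sentence above is English; since this is a Korean model, a Korean input is more representative. A minimal variation of the Python example (the sentence itself is only illustrative):

```python
# Korean sample input; any DataFrame with a "text" column works the same way.
data = spark.createDataFrame([["Spark NLP는 무료 오픈소스 라이브러리입니다."]]).toDF("text")

result = pipeline.fit(data).transform(data)

# Show the tokens that were embedded.
result.selectExpr("explode(embeddings.result) as token").show(truncate=False)
```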

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|bert_embeddings_pretrain|
|Compatibility:|Spark NLP 4.3.0+|
|License:|Open Source|
|Edition:|Official|
|Input Labels:|[sentence, token]|
|Output Labels:|[bert]|
|Language:|ko|
|Size:|415.5 MB|
|Case sensitive:|true|

## References

https://huggingface.co/onlydj96/bert_pretrain
---
layout: model
title: English Bert Embeddings Uncased model (from Shafin)
author: John Snow Labs
name: bert_embeddings_chemical_uncased_finetuned_cust_c1_cust
date: 2023-02-21
tags: [open_source, bert, bert_embeddings, bertformaskedlm, en, tensorflow]
task: Embeddings
language: en
edition: Spark NLP 4.3.0
spark_version: 3.0
supported: true
engine: tensorflow
annotator: BertEmbeddings
article_header:
type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

Pretrained BertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `chemical-bert-uncased-finetuned-cust-c1-cust` is an English model originally trained by `shafin`.

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_embeddings_chemical_uncased_finetuned_cust_c1_cust_en_4.3.0_3.0_1677001598364.zip){:.button.button-orange}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_embeddings_chemical_uncased_finetuned_cust_c1_cust_en_4.3.0_3.0_1677001598364.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}

```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, BertEmbeddings
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

embeddings = BertEmbeddings.pretrained("bert_embeddings_chemical_uncased_finetuned_cust_c1_cust", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("embeddings") \
    .setCaseSensitive(True)

pipeline = Pipeline(stages=[documentAssembler, tokenizer, embeddings])

data = spark.createDataFrame([["I love Spark NLP"]]).toDF("text")

result = pipeline.fit(data).transform(data)
```
```scala
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val embeddings = BertEmbeddings.pretrained("bert_embeddings_chemical_uncased_finetuned_cust_c1_cust", "en")
  .setInputCols(Array("document", "token"))
  .setOutputCol("embeddings")
  .setCaseSensitive(true)

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings))

val data = Seq("I love Spark NLP").toDS.toDF("text")

val result = pipeline.fit(data).transform(data)
```
</div>
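
For quick, single-string inference without building a DataFrame, the fitted pipeline can be wrapped in Spark NLP's `LightPipeline`. A minimal sketch, assuming the Python pipeline above has been defined (the example sentence is illustrative):

```python
from sparknlp.base import LightPipeline

# Fit once, then annotate plain strings directly on the driver.
model = pipeline.fit(data)
light = LightPipeline(model)

# fullAnnotate returns Annotation objects, including the embedding vectors.
annotations = light.fullAnnotate("Sodium chloride dissolves readily in water.")
for ann in annotations[0]["embeddings"]:
    print(ann.result, len(ann.embeddings))
```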

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|bert_embeddings_chemical_uncased_finetuned_cust_c1_cust|
|Compatibility:|Spark NLP 4.3.0+|
|License:|Open Source|
|Edition:|Official|
|Input Labels:|[sentence, token]|
|Output Labels:|[bert]|
|Language:|en|
|Size:|412.1 MB|
|Case sensitive:|true|

## References

https://huggingface.co/shafin/chemical-bert-uncased-finetuned-cust-c1-cust
---
layout: model
title: English Bert Embeddings Uncased model (from Shafin)
author: John Snow Labs
name: bert_embeddings_chemical_uncased_finetuned_cust_c2
date: 2023-02-21
tags: [open_source, bert, bert_embeddings, bertformaskedlm, en, tensorflow]
task: Embeddings
language: en
edition: Spark NLP 4.3.0
spark_version: 3.0
supported: true
engine: tensorflow
annotator: BertEmbeddings
article_header:
type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

Pretrained BertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `chemical-bert-uncased-finetuned-cust-c2` is an English model originally trained by `shafin`.

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bert_embeddings_chemical_uncased_finetuned_cust_c2_en_4.3.0_3.0_1676998811176.zip){:.button.button-orange}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bert_embeddings_chemical_uncased_finetuned_cust_c2_en_4.3.0_3.0_1676998811176.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}

```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, BertEmbeddings
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

embeddings = BertEmbeddings.pretrained("bert_embeddings_chemical_uncased_finetuned_cust_c2", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("embeddings") \
    .setCaseSensitive(True)

pipeline = Pipeline(stages=[documentAssembler, tokenizer, embeddings])

data = spark.createDataFrame([["I love Spark NLP"]]).toDF("text")

result = pipeline.fit(data).transform(data)
```
```scala
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val embeddings = BertEmbeddings.pretrained("bert_embeddings_chemical_uncased_finetuned_cust_c2", "en")
  .setInputCols(Array("document", "token"))
  .setOutputCol("embeddings")
  .setCaseSensitive(true)

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, embeddings))

val data = Seq("I love Spark NLP").toDS.toDF("text")

val result = pipeline.fit(data).transform(data)
```
</div>
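
Once fitted, the pipeline is a regular Spark ML `PipelineModel`, so it can be persisted and reloaded instead of re-downloading the embeddings model on every run. A minimal sketch, assuming the Python pipeline above; the path is illustrative:

```python
from pyspark.ml import PipelineModel

# Persist the fitted pipeline to disk (path is illustrative).
model = pipeline.fit(data)
model.write().overwrite().save("/tmp/bert_embeddings_chemical_c2_pipeline")

# Later: reload and transform new data without fetching the model again.
reloaded = PipelineModel.load("/tmp/bert_embeddings_chemical_c2_pipeline")
reloaded.transform(data).select("embeddings.result").show(truncate=False)
```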

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|bert_embeddings_chemical_uncased_finetuned_cust_c2|
|Compatibility:|Spark NLP 4.3.0+|
|License:|Open Source|
|Edition:|Official|
|Input Labels:|[sentence, token]|
|Output Labels:|[bert]|
|Language:|en|
|Size:|412.1 MB|
|Case sensitive:|true|

## References

https://huggingface.co/shafin/chemical-bert-uncased-finetuned-cust-c2