2023-03-01-roberta_token_classifier_large_ontonotes5_la (#13589)
* Add model 2023-03-01-roberta_token_classifier_large_ontonotes5_la

* Add model 2023-03-01-roberta_token_classifier_base_ner_demo_mn

* Add model 2023-03-01-roberta_token_classifier_fullstop_catalan_punctuation_prediction_ca

* Add model 2023-03-01-roberta_token_classifier_slovakbert_ner_sk

* Add model 2023-03-01-roberta_token_classifier_bertin_base_pos_conll2002_es

* Add model 2023-03-01-roberta_token_classifier_bertin_base_ner_conll2002_es

* Add model 2023-03-01-roberta_token_classifier_ticker_en

* Add model 2023-03-03-distilbert_token_classifier_ner_roles_openapi_en

* Add model 2023-03-03-distilbert_token_classifier_base_ner_en

* Add model 2023-03-03-distilbert_token_classifier_cpener_test_en

* Add model 2023-03-03-distilbert_token_classifier_base_uncased_ft_conll2003_en

* Add model 2023-03-03-distilbert_token_classifier_keyphrase_extraction_inspec_en

* Add model 2023-03-03-distilbert_token_classifier_base_uncased_finetuned_conll2003_en

* Add model 2023-03-03-distilbert_token_classifier_autotrain_job_all_903929564_en

* Add model 2023-03-03-distilbert_token_classifier_autotrain_company_all_903429548_en

* Add model 2023-03-03-distilbert_token_classifier_autotrain_company_vs_all_902129475_en

* Add model 2023-03-03-distilbert_token_classifier_autotrain_final_784824211_en

* Add model 2023-03-03-distilbert_token_classifier_autotrain_ner_778023879_en

* Add model 2023-03-03-dtilbert_token_classifier_typo_detector_is

* Add model 2023-03-03-distilbert_token_classifier_icelandic_ner_distilbert_is

* Add model 2023-03-03-distilbert_token_classifier_keyphrase_extraction_kptimes_en

* Add model 2023-03-03-distilbert_token_classifier_keyphrase_extraction_openkp_en

* Add model 2023-03-03-distilbert_token_classifier_autotrain_company_all_903429540_en

* Add model 2023-03-03-distilbert_token_classifier_autotrain_name_all_904029577_en

* Add model 2023-03-03-distilbert_token_classifier_autotrain_name_vsv_all_901529445_en

* Add model 2023-03-03-distilbert_token_classifier_autotrain_name_all_904029569_en

* Add model 2023-03-03-distilbert_token_classifier_autotrain_final_784824218_en

* Add model 2023-03-03-distilbert_tok_classifier_typo_detector_en

* Add model 2023-03-03-distilbert_token_classifier_autotrain_final_784824209_en

---------

Co-authored-by: gokhanturer <mgturer@gmail.com>
jsl-models and gokhanturer authored Mar 5, 2023
1 parent 4d314f0 commit d6b151f
Showing 29 changed files with 2,865 additions and 0 deletions.
@@ -0,0 +1,98 @@
---
layout: model
title: Mongolian RobertaForTokenClassification Base Cased model (from onon214)
author: John Snow Labs
name: roberta_token_classifier_base_ner_demo
date: 2023-03-01
tags: [mn, open_source, roberta, token_classification, ner, tensorflow]
task: Named Entity Recognition
language: mn
edition: Spark NLP 4.3.0
spark_version: 3.0
supported: true
engine: tensorflow
annotator: RoBertaForTokenClassification
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

Pretrained RobertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `roberta-base-ner-demo` is a Mongolian model originally trained by `onon214`.

## Predicted Entities

`MISC`, `LOC`, `PER`, `ORG`

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_token_classifier_base_ner_demo_mn_4.3.0_3.0_1677703536380.zip){:.button.button-orange}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_token_classifier_base_ner_demo_mn_4.3.0_3.0_1677703536380.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, RoBertaForTokenClassification
from pyspark.ml import Pipeline

# Turn raw text into document annotations.
documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# Split each document into tokens.
tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

# Load the pretrained Mongolian NER model and tag each token.
tokenClassifier = RoBertaForTokenClassification.pretrained("roberta_token_classifier_base_ner_demo", "mn") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("ner")

pipeline = Pipeline(stages=[documentAssembler, tokenizer, tokenClassifier])

data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")

result = pipeline.fit(data).transform(data)
```
```scala
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

// Turn raw text into document annotations.
val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

// Split each document into tokens.
val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

// Load the pretrained Mongolian NER model and tag each token.
val tokenClassifier = RoBertaForTokenClassification.pretrained("roberta_token_classifier_base_ner_demo", "mn")
  .setInputCols(Array("document", "token"))
  .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))

val data = Seq("PUT YOUR STRING HERE").toDS.toDF("text")

val result = pipeline.fit(data).transform(data)
```
</div>
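
A quick way to inspect the output (a minimal sketch, not part of the original card, assuming the `result` DataFrame produced above): the `token` and `ner` annotation columns carry parallel `result` arrays, so selecting them prints each token next to its predicted tag.

```python
# Tokens and predicted NER tags are position-aligned arrays inside the
# annotation columns; selecting the nested `result` fields prints them.
result.select("token.result", "ner.result").show(truncate=False)
```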

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|roberta_token_classifier_base_ner_demo|
|Compatibility:|Spark NLP 4.3.0+|
|License:|Open Source|
|Edition:|Official|
|Input Labels:|[document, token]|
|Output Labels:|[ner]|
|Language:|mn|
|Size:|466.3 MB|
|Case sensitive:|true|
|Max sentence length:|128|

## References

- https://huggingface.co/onon214/roberta-base-ner-demo
@@ -0,0 +1,98 @@
---
layout: model
title: Spanish RobertaForTokenClassification Base Cased model (from bertin-project)
author: John Snow Labs
name: roberta_token_classifier_bertin_base_ner_conll2002
date: 2023-03-01
tags: [es, open_source, roberta, token_classification, ner, tensorflow]
task: Named Entity Recognition
language: es
edition: Spark NLP 4.3.0
spark_version: 3.0
supported: true
engine: tensorflow
annotator: RoBertaForTokenClassification
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

Pretrained RobertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `bertin-base-ner-conll2002-es` is a Spanish model originally trained by `bertin-project`.

## Predicted Entities

`MISC`, `LOC`, `PER`, `ORG`

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_token_classifier_bertin_base_ner_conll2002_es_4.3.0_3.0_1677703750308.zip){:.button.button-orange}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_token_classifier_bertin_base_ner_conll2002_es_4.3.0_3.0_1677703750308.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, RoBertaForTokenClassification
from pyspark.ml import Pipeline

# Turn raw text into document annotations.
documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# Split each document into tokens.
tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

# Load the pretrained Spanish NER model and tag each token.
tokenClassifier = RoBertaForTokenClassification.pretrained("roberta_token_classifier_bertin_base_ner_conll2002", "es") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("ner")

pipeline = Pipeline(stages=[documentAssembler, tokenizer, tokenClassifier])

data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")

result = pipeline.fit(data).transform(data)
```
```scala
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

// Turn raw text into document annotations.
val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

// Split each document into tokens.
val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

// Load the pretrained Spanish NER model and tag each token.
val tokenClassifier = RoBertaForTokenClassification.pretrained("roberta_token_classifier_bertin_base_ner_conll2002", "es")
  .setInputCols(Array("document", "token"))
  .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))

val data = Seq("PUT YOUR STRING HERE").toDS.toDF("text")

val result = pipeline.fit(data).transform(data)
```
</div>
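
As an optional post-processing sketch (not part of the original card, and assuming the model emits IOB-style tags such as `B-PER`/`I-PER`, as is typical for CoNLL-2002 NER models), Spark NLP's `NerConverter` can merge token-level tags into entity chunks:

```python
from sparknlp.annotator import NerConverter

# Merge token-level IOB tags into entity chunks, e.g. "B-ORG I-ORG" -> one ORG chunk.
# Assumes the `result` DataFrame produced by the pipeline above.
converter = NerConverter() \
    .setInputCols(["document", "token", "ner"]) \
    .setOutputCol("ner_chunk")

converter.transform(result).select("ner_chunk.result").show(truncate=False)
```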

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|roberta_token_classifier_bertin_base_ner_conll2002|
|Compatibility:|Spark NLP 4.3.0+|
|License:|Open Source|
|Edition:|Official|
|Input Labels:|[document, token]|
|Output Labels:|[ner]|
|Language:|es|
|Size:|426.2 MB|
|Case sensitive:|true|
|Max sentence length:|128|

## References

- https://huggingface.co/bertin-project/bertin-base-ner-conll2002-es
@@ -0,0 +1,98 @@
---
layout: model
title: Spanish RobertaForTokenClassification Base Cased model (from bertin-project)
author: John Snow Labs
name: roberta_token_classifier_bertin_base_pos_conll2002
date: 2023-03-01
tags: [es, open_source, roberta, token_classification, ner, tensorflow]
task: Named Entity Recognition
language: es
edition: Spark NLP 4.3.0
spark_version: 3.0
supported: true
engine: tensorflow
annotator: RoBertaForTokenClassification
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

Pretrained RobertaForTokenClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `bertin-base-pos-conll2002-es` is a Spanish model originally trained by `bertin-project`.

## Predicted Entities

`DA`, `VAM`, `I`, `VSM`, `PP`, `VSS`, `DI`, `AQ`, `Y`, `VMN`, `Fit`, `Fg`, `Fia`, `Fpa`, `Fat`, `VSN`, `Fpt`, `DD`, `VAP`, `SP`, `NP`, `Fh`, `VAI`, `CC`, `Fd`, `VMG`, `NC`, `PX`, `DE`, `Fz`, `PN`, `Fx`, `Faa`, `Fs`, `Fe`, `VSP`, `DP`, `VAS`, `VSG`, `PT`, `Ft`, `VAN`, `PI`, `P0`, `RG`, `RN`, `CS`, `DN`, `VMI`, `Fp`, `Fc`, `PR`, `VSI`, `AO`, `VMM`, `PD`, `VMS`, `DT`, `Z`, `VMP`

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/roberta_token_classifier_bertin_base_pos_conll2002_es_4.3.0_3.0_1677703697571.zip){:.button.button-orange}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/roberta_token_classifier_bertin_base_pos_conll2002_es_4.3.0_3.0_1677703697571.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, RoBertaForTokenClassification
from pyspark.ml import Pipeline

# Turn raw text into document annotations.
documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# Split each document into tokens.
tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

# Load the pretrained Spanish CoNLL-2002 POS tagging model and tag each token.
tokenClassifier = RoBertaForTokenClassification.pretrained("roberta_token_classifier_bertin_base_pos_conll2002", "es") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("ner")

pipeline = Pipeline(stages=[documentAssembler, tokenizer, tokenClassifier])

data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")

result = pipeline.fit(data).transform(data)
```
```scala
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

// Turn raw text into document annotations.
val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

// Split each document into tokens.
val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

// Load the pretrained Spanish CoNLL-2002 POS tagging model and tag each token.
val tokenClassifier = RoBertaForTokenClassification.pretrained("roberta_token_classifier_bertin_base_pos_conll2002", "es")
  .setInputCols(Array("document", "token"))
  .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, tokenClassifier))

val data = Seq("PUT YOUR STRING HERE").toDS.toDF("text")

val result = pipeline.fit(data).transform(data)
```
</div>
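
As an illustrative follow-up (not part of the original card, assuming the `result` DataFrame from the pipeline above), the predicted tags, drawn from the CoNLL-2002 tagset listed under "Predicted Entities", can be flattened and counted to get a quick tag distribution:

```python
from pyspark.sql import functions as F

# Flatten the predicted tag arrays and count how often each POS label occurs.
result.select(F.explode(F.col("ner.result")).alias("pos_tag")) \
    .groupBy("pos_tag") \
    .count() \
    .orderBy(F.desc("count")) \
    .show(truncate=False)
```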

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|roberta_token_classifier_bertin_base_pos_conll2002|
|Compatibility:|Spark NLP 4.3.0+|
|License:|Open Source|
|Edition:|Official|
|Input Labels:|[document, token]|
|Output Labels:|[ner]|
|Language:|es|
|Size:|426.4 MB|
|Case sensitive:|true|
|Max sentence length:|128|

## References

- https://huggingface.co/bertin-project/bertin-base-pos-conll2002-es