From 0cc970a962f84b3222710467e7171c5e690a0e02 Mon Sep 17 00:00:00 2001
From: ahmedlone127
process extensive textual input, expanding its utility in handling more complex tasks.
In summary, Mistral 7B represents a notable advancement in language models, offering a reliable and versatile solution for various natural language processing challenges.
Pretrained models can be loaded with pretrained of the companion object:
val mistral = MistralTransformer.pretrained()
  .setInputCols("document")
  .setOutputCol("generation")
-The default model is "mistral-7b", if no name is provided. For available pretrained models
+The default model is "mistral_7b", if no name is provided. For available pretrained models
please see the Models Hub.
For extended examples of usage, see MistralTestSpec.
References:
Paper Abstract:
We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered for superior
performance and efficiency. Mistral 7B outperforms Llama 2 13B across all evaluated
benchmarks, and Llama 1 34B in reasoning, mathematics, and code generation. Our model
leverages grouped-query attention (GQA) for faster inference, coupled with sliding window
@@ -305,7 +305,7 @@
.setInputCol("text")
.setOutputCol("documents")
-val mistral = MistralTransformer.pretrained("mistral-7b")
+val mistral = MistralTransformer.pretrained("mistral_7b")
.setInputCols(Array("documents"))
.setMinOutputLength(10)
.setMaxOutputLength(50)
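Pieced together, the corrected example reads end to end roughly as follows. This is a minimal sketch assembled from the hunk above; the DocumentAssembler stage, the Pipeline wiring, and the sample input row follow the standard Spark NLP pattern and are assumptions, not part of this diff.

import com.johnsnowlabs.nlp.base.DocumentAssembler
import com.johnsnowlabs.nlp.annotators.seq2seq.MistralTransformer
import org.apache.spark.ml.Pipeline
// Assumes a SparkSession named `spark` is already in scope.
import spark.implicits._

// Convert raw text into the "documents" annotation column consumed below.
val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("documents")

// Load the checkpoint under its renamed identifier "mistral_7b".
val mistral = MistralTransformer.pretrained("mistral_7b")
  .setInputCols(Array("documents"))
  .setMinOutputLength(10)
  .setMaxOutputLength(50)
  .setOutputCol("generation")

val pipeline = new Pipeline().setStages(Array(documentAssembler, mistral))

// Hypothetical single-row input, purely for illustration.
val data = Seq("Mistral 7B is a language model that").toDF("text")
val result = pipeline.fit(data).transform(data)
result.select("generation.result").show(truncate = false)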
diff --git a/docs/api/com/johnsnowlabs/nlp/annotators/seq2seq/Phi2Transformer.html b/docs/api/com/johnsnowlabs/nlp/annotators/seq2seq/Phi2Transformer.html
index 66127d9bf656bb..4f5409c45443cb 100644
--- a/docs/api/com/johnsnowlabs/nlp/annotators/seq2seq/Phi2Transformer.html
+++ b/docs/api/com/johnsnowlabs/nlp/annotators/seq2seq/Phi2Transformer.html
@@ -311,7 +311,7 @@
.setInputCol("text")
.setOutputCol("documents")
-val Phi2 = Phi2Transformer.pretrained("Phi2-7b")
+val Phi2 = Phi2Transformer.pretrained("phi2_7b")
.setInputCols(Array("documents"))
.setMinOutputLength(10)
.setMaxOutputLength(50)
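The Phi2 pipeline follows the same shape; a minimal sketch under the same assumptions as the Mistral sketch above, differing only in the annotator class and the renamed "phi2_7b" identifier.

import com.johnsnowlabs.nlp.annotators.seq2seq.Phi2Transformer

// Reuses the documentAssembler stage from the Mistral sketch above.
val phi2 = Phi2Transformer.pretrained("phi2_7b")
  .setInputCols(Array("documents"))
  .setMinOutputLength(10)
  .setMaxOutputLength(50)
  .setOutputCol("generation")

val phi2Pipeline = new Pipeline().setStages(Array(documentAssembler, phi2))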
diff --git a/docs/api/com/johnsnowlabs/nlp/annotators/seq2seq/index.html b/docs/api/com/johnsnowlabs/nlp/annotators/seq2seq/index.html
index c63e37ee760c8f..29e29cd55d177e 100644
--- a/docs/api/com/johnsnowlabs/nlp/annotators/seq2seq/index.html
+++ b/docs/api/com/johnsnowlabs/nlp/annotators/seq2seq/index.html
@@ -1040,9 +1040,9 @@
Type Members
process extensive textual input, expanding its utility in handling more complex tasks.
In summary, Mistral 7B represents a notable advancement in language models, offering a reliable and versatile solution for various natural language processing challenges.
Pretrained models can be loaded with pretrained of the companion object:
val mistral = MistralTransformer.pretrained()
  .setInputCols("document")
  .setOutputCol("generation")
-The default model is "mistral-7b", if no name is provided. For available pretrained models
+The default model is "mistral_7b", if no name is provided. For available pretrained models
please see the Models Hub.
For extended examples of usage, see MistralTestSpec.
References:
Paper Abstract:
We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered for superior
performance and efficiency. Mistral 7B outperforms Llama 2 13B across all evaluated
benchmarks, and Llama 1 34B in reasoning, mathematics, and code generation. Our model
leverages grouped-query attention (GQA) for faster inference, coupled with sliding window
@@ -1059,7 +1059,7 @@ Type Members
.setInputCol("text")
.setOutputCol("documents")
-val mistral = MistralTransformer.pretrained("mistral-7b")
+val mistral = MistralTransformer.pretrained("mistral_7b")
.setInputCols(Array("documents"))
.setMinOutputLength(10)
.setMaxOutputLength(50)
@@ -1134,7 +1134,7 @@ Type Members
.setInputCol("text")
.setOutputCol("documents")
-val Phi2 = Phi2Transformer.pretrained("Phi2-7b")
+val Phi2 = Phi2Transformer.pretrained("phi2_7b")
.setInputCols(Array("documents"))
.setMinOutputLength(10)
.setMaxOutputLength(50)
diff --git a/docs/api/python/modules/sparknlp/annotator/seq2seq/mistral_transformer.html b/docs/api/python/modules/sparknlp/annotator/seq2seq/mistral_transformer.html
index 4456d2a7d70a0a..d73a77c9461133 100644
--- a/docs/api/python/modules/sparknlp/annotator/seq2seq/mistral_transformer.html
+++ b/docs/api/python/modules/sparknlp/annotator/seq2seq/mistral_transformer.html
@@ -387,7 +387,7 @@ Source code for sparknlp.annotator.seq2seq.mistral_transformer
... .setOutputCol("generation")
-"mistral-7b", if no name is provided. For available
+"mistral_7b", if no name is provided. For available
pretrained models please see the Models Hub.
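The same default applies when no name is passed; a minimal sketch on the Scala side, assuming the no-argument overload documented above:

// pretrained() with no argument is documented above to resolve to "mistral_7b".
val defaultMistral = MistralTransformer.pretrained()
  .setInputCols("documents")
  .setOutputCol("generation")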