2024-10-29-gemma_2_2b_it_iq3_m_en #14446

Merged
Changes from all commits (79 commits)
dfa69a2
Add model 2024-10-29-gemma_2_2b_it_iq3_m_en
ahmedlone127 Oct 29, 2024
0a6edde
Add model 2024-10-29-gemma_2_2b_it_iq4_xs_en
ahmedlone127 Oct 29, 2024
331a2d0
Add model 2024-10-29-gemma_2_2b_it_q3_k_l_en
ahmedlone127 Oct 29, 2024
bcafcd8
Add model 2024-10-29-gemma_2_2b_it_q4_k_m_en
ahmedlone127 Oct 29, 2024
db9f60b
Add model 2024-10-29-gemma_2_2b_it_q5_k_m_en
ahmedlone127 Oct 29, 2024
b2344a1
Add model 2024-10-29-gemma_2_2b_it_q6_k_en
ahmedlone127 Oct 29, 2024
9f4d1da
Add model 2024-10-29-gemma_2_2b_it_q8_0_en
ahmedlone127 Oct 29, 2024
f80a9c8
Add model 2024-10-29-llama_3.2_3b_instruct_q3_k_l_xx
ahmedlone127 Oct 29, 2024
2ad80e4
Add model 2024-10-29-llama_3.2_3b_instruct_q4_k_m_xx
ahmedlone127 Oct 29, 2024
09b3b46
Add model 2024-10-29-llama_3.2_3b_instruct_q6_k_xx
ahmedlone127 Oct 29, 2024
120705c
Add model 2024-10-29-llama_3.2_3b_instruct_q8_0_xx
ahmedlone127 Oct 29, 2024
d952f26
Add model 2024-10-29-llama_3.2_1b_instruct_q3_k_l_xx
ahmedlone127 Oct 29, 2024
6bcc35a
Add model 2024-10-29-llama_3.2_1b_instruct_q4_k_m_xx
ahmedlone127 Oct 29, 2024
d315f0e
Add model 2024-10-29-llama_3.2_1b_instruct_q6_k_xx
ahmedlone127 Oct 29, 2024
604596d
Add model 2024-10-29-llama_3.2_1b_instruct_q8_0_xx
ahmedlone127 Oct 29, 2024
5d2dcad
Add model 2024-10-29-mistral_7b_instruct_v0.3_iq3_m_en
ahmedlone127 Oct 29, 2024
9ca180b
Add model 2024-10-29-mistral_7b_instruct_v0.3_q3_k_l_en
ahmedlone127 Oct 29, 2024
4703f14
Add model 2024-10-29-meta_llama_3_8b_instruct_iq3_m_en
ahmedlone127 Oct 29, 2024
1e8069d
Add model 2024-10-29-phi_3.1_mini_4k_instruct_iq3_m_en
ahmedlone127 Oct 29, 2024
813bbfa
Add model 2024-10-29-mathstral_7b_v0.1_iq4_xs_en
ahmedlone127 Oct 29, 2024
98ad4e7
Add model 2024-10-29-mathstral_7b_v0.1_q3_k_l_en
ahmedlone127 Oct 29, 2024
020ecf2
Add model 2024-10-29-qwen2_math_1.5b_instruct_iq4_xs_en
ahmedlone127 Oct 29, 2024
a83dc42
Add model 2024-10-29-qwen2_math_1.5b_instruct_q4_k_m_en
ahmedlone127 Oct 29, 2024
d888ac1
Add model 2024-10-29-qwen2_math_1.5b_instruct_q5_k_m_en
ahmedlone127 Oct 29, 2024
5ad05e8
Add model 2024-10-29-qwen2_math_1.5b_instruct_q6_k_en
ahmedlone127 Oct 29, 2024
1148d3f
Add model 2024-10-29-qwen2_math_1.5b_instruct_q8_0_en
ahmedlone127 Oct 29, 2024
527b6ce
Add model 2024-10-29-yi_coder_1.5b_chat_q4_0_4_4_en
ahmedlone127 Oct 29, 2024
9ac016d
Add model 2024-10-29-yi_coder_1.5b_chat_q4_k_m_en
ahmedlone127 Oct 29, 2024
c8ec364
Add model 2024-10-29-yi_coder_1.5b_chat_q6_k_en
ahmedlone127 Oct 29, 2024
f64939e
Add model 2024-10-29-yi_coder_1.5b_chat_q8_0_en
ahmedlone127 Oct 29, 2024
3aa855e
Add model 2024-10-29-qwen2_500m_instruct_iq4_xs_en
ahmedlone127 Oct 29, 2024
54e3a49
Add model 2024-10-29-qwen2_500m_instruct_q4_k_m_en
ahmedlone127 Oct 29, 2024
2ff6cd7
Add model 2024-10-29-qwen2_500m_instruct_q6_k_en
ahmedlone127 Oct 29, 2024
27390e6
Add model 2024-10-29-qwen2_500m_instruct_q8_0_en
ahmedlone127 Oct 29, 2024
d3c2417
Add model 2024-10-29-qwen2_500m_instruct_q5_k_m_en
ahmedlone127 Oct 29, 2024
fd836aa
Add model 2024-10-29-qwen2_500m_instruct_f32_en
ahmedlone127 Oct 29, 2024
9e41fde
Add model 2024-10-30-qwen2.5_3b_instruct_q3_k_l_en
ahmedlone127 Oct 30, 2024
4325ac1
Add model 2024-10-30-qwen2.5_3b_instruct_q4_k_m_en
ahmedlone127 Oct 30, 2024
7a69e35
Add model 2024-10-30-qwen2.5_3b_instruct_q6_k_en
ahmedlone127 Oct 30, 2024
062ee36
Add model 2024-10-30-qwen2.5_3b_instruct_q8_0_en
ahmedlone127 Oct 30, 2024
3184dfe
Add model 2024-10-30-codellama_7b_kstack_iq3_m_en
ahmedlone127 Oct 30, 2024
b046216
Add model 2024-10-30-meta_llama_3_8b_instruct_iq3_m_en
ahmedlone127 Oct 30, 2024
6b6a3d3
Add model 2024-10-30-qwen2.5_0.5b_instruct_q3_k_l_en
ahmedlone127 Oct 30, 2024
f92889d
Add model 2024-10-30-qwen2.5_0.5b_instruct_q4_k_m_en
ahmedlone127 Oct 30, 2024
847cafb
Add model 2024-10-30-qwen2.5_0.5b_instruct_q6_k_en
ahmedlone127 Oct 30, 2024
bd5b306
Add model 2024-10-30-qwen2.5_0.5b_instruct_q8_0_en
ahmedlone127 Oct 30, 2024
686e2b5
Add model 2024-10-30-qwen2.5_1.5b_instruct_q3_k_l_en
ahmedlone127 Oct 30, 2024
d9a18e0
Add model 2024-10-30-qwen2.5_1.5b_instruct_q4_k_m_en
ahmedlone127 Oct 30, 2024
f244e34
Add model 2024-10-30-qwen2.5_1.5b_instruct_q6_k_en
ahmedlone127 Oct 30, 2024
2308f8b
Add model 2024-10-30-qwen2.5_1.5b_instruct_q8_0_en
ahmedlone127 Oct 30, 2024
fa583b8
Add model 2024-10-30-qwen2.5_coder_1.5b_instruct_q3_k_l_en
ahmedlone127 Oct 30, 2024
38931ca
Add model 2024-10-30-qwen2.5_coder_1.5b_instruct_q4_k_m_en
ahmedlone127 Oct 30, 2024
db545e8
Add model 2024-10-30-qwen2.5_coder_1.5b_instruct_q6_k_en
ahmedlone127 Oct 30, 2024
c4400ad
Add model 2024-10-30-qwen2.5_coder_1.5b_instruct_q8_0_en
ahmedlone127 Oct 30, 2024
0c63197
Add model 2024-10-30-yi_coder_1.5b_q4_0_4_4_en
ahmedlone127 Oct 30, 2024
bc70a47
Add model 2024-10-30-yi_coder_1.5b_q4_k_m_en
ahmedlone127 Oct 30, 2024
b22db37
Add model 2024-10-30-yi_coder_1.5b_q6_k_en
ahmedlone127 Oct 30, 2024
0c795f3
Add model 2024-10-30-yi_coder_1.5b_q8_0_en
ahmedlone127 Oct 30, 2024
7b5d4ae
Add model 2024-10-30-codellama_7b_kstack_clean_iq3_m_en
ahmedlone127 Oct 30, 2024
58d12d5
Add model 2024-10-30-deepseek_coder_6.7b_kexer_iq3_m_en
ahmedlone127 Oct 30, 2024
a302baf
Add model 2024-10-30-yi_1.5_6b_chat_q3_k_l_en
ahmedlone127 Oct 30, 2024
88db5f6
Add model 2024-10-30-yi_1.5_6b_chat_q4_k_m_en
ahmedlone127 Oct 30, 2024
30a74c1
Add model 2024-10-30-alchemistcoder_l_7b_iq4_xs_en
ahmedlone127 Oct 30, 2024
9ebf8e0
Add model 2024-10-30-qwen2.5_math_1.5b_instruct_q3_k_l_en
ahmedlone127 Oct 30, 2024
90d8d23
Add model 2024-10-30-qwen2.5_math_1.5b_instruct_q4_k_m_en
ahmedlone127 Oct 30, 2024
1dc1e72
Add model 2024-10-30-qwen2.5_math_1.5b_instruct_q6_k_en
ahmedlone127 Oct 30, 2024
74e3137
Add model 2024-10-30-qwen2.5_math_1.5b_instruct_q8_0_en
ahmedlone127 Oct 30, 2024
e256dba
Add model 2024-10-30-alchemistcoder_ds_6.7b_iq4_xs_en
ahmedlone127 Oct 30, 2024
f26e041
Add model 2024-10-30-deepseek_coder_1.3b_kexer_iq3_m_en
ahmedlone127 Oct 30, 2024
272c4bc
Add model 2024-10-30-deepseek_coder_1.3b_kexer_q4_k_m_en
ahmedlone127 Oct 30, 2024
d05bdb3
Add model 2024-10-30-deepseek_coder_1.3b_kexer_q6_k_en
ahmedlone127 Oct 30, 2024
a866ab7
Add model 2024-10-30-deepseek_coder_1.3b_kexer_q8_0_en
ahmedlone127 Oct 30, 2024
39128f2
Add model 2024-10-30-internlm2_5_1_8b_chat_iq4_xs_en
ahmedlone127 Oct 30, 2024
4d548fa
Add model 2024-10-30-internlm2_5_1_8b_chat_q3_k_l_en
ahmedlone127 Oct 30, 2024
15ba933
Add model 2024-10-30-internlm2_5_1_8b_chat_q4_k_m_en
ahmedlone127 Oct 30, 2024
fcab545
Add model 2024-10-30-internlm2_5_1_8b_chat_q5_k_m_en
ahmedlone127 Oct 30, 2024
1becbbc
Add model 2024-10-30-internlm2_5_1_8b_chat_q6_k_en
ahmedlone127 Oct 30, 2024
76b1022
Add model 2024-10-30-internlm2_5_1_8b_chat_q8_0_en
ahmedlone127 Oct 30, 2024
61f0b8d
Merge branch 'models_hub' into 2024-10-29-gemma_2_2b_it_iq3_m_en_IqDz…
maziyarpanahi Oct 30, 2024
101 changes: 101 additions & 0 deletions docs/_posts/ahmedlone127/2024-10-29-gemma_2_2b_it_q5_k_m_en.md
@@ -0,0 +1,101 @@
---
layout: model
title: English gemma_2_2b_it_q5_k_m AutoGGUFModel from lmstudio-community
author: John Snow Labs
name: gemma_2_2b_it_q5_k_m
date: 2024-10-29
tags: [en, open_source, onnx, conversational, text_generation, text_to_text, llamacpp]
task: Text Generation
language: en
edition: Spark NLP 5.5.1
spark_version: 3.0
supported: true
engine: llamacpp
annotator: AutoGGUFModel
article_header:
type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

Pretrained AutoGGUFModel model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `gemma_2_2b_it_q5_k_m` is an English model prepared by lmstudio-community.

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/gemma_2_2b_it_q5_k_m_en_5.5.1_3.0_1730229529211.zip){:.button.button-orange.button-orange-trans.arr.button-icon}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/gemma_2_2b_it_q5_k_m_en_5.5.1_3.0_1730229529211.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import AutoGGUFModel
from pyspark.ml import Pipeline

# Start a Spark session with Spark NLP on the classpath
spark = sparknlp.start()

document = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

autoGGUFModel = AutoGGUFModel.pretrained("gemma_2_2b_it_q5_k_m","en") \
.setInputCols(["document"]) \
.setOutputCol("completions") \
.setBatchSize(4) \
.setNPredict(20) \
.setNGpuLayers(99) \
.setTemperature(0.4) \
.setTopK(40) \
.setTopP(0.9) \
.setPenalizeNl(True)

pipeline = Pipeline().setStages([document, autoGGUFModel])
data = spark.createDataFrame([["Hello, I am a"]]).toDF("text")
result = pipeline.fit(data).transform(data)
result.select("completions").show(truncate = False)

```
```scala

import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._ // assumes an active SparkSession named `spark`

val document = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val autoGGUFModel = AutoGGUFModel.pretrained("gemma_2_2b_it_q5_k_m", "en")
.setInputCols("document")
.setOutputCol("completions")
.setBatchSize(4)
.setNPredict(20)
.setNGpuLayers(99)
.setTemperature(0.4f)
.setTopK(40)
.setTopP(0.9f)
.setPenalizeNl(true)

val pipeline = new Pipeline().setStages(Array(document, autoGGUFModel))

val data = Seq("Hello, I am a").toDF("text")
val result = pipeline.fit(data).transform(data)
result.select("completions").show(truncate = false)

```
</div>
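
If the zip from the Download button above has already been fetched, the model can also be loaded from a local path rather than pulled at runtime with `pretrained()`. A minimal sketch, assuming the archive has been unpacked to a hypothetical local directory:

```python
from sparknlp.annotator import AutoGGUFModel

# Hypothetical path to the unpacked model archive; adjust to your environment.
local_model_path = "/models/gemma_2_2b_it_q5_k_m_en_5.5.1_3.0"

# Spark NLP annotator models can be restored from disk with the static load() method.
autoGGUFModel = AutoGGUFModel.load(local_model_path) \
    .setInputCols(["document"]) \
    .setOutputCol("completions")
```

The rest of the pipeline is identical to the `pretrained()` example above.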

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|gemma_2_2b_it_q5_k_m|
|Compatibility:|Spark NLP 5.5.1+|
|License:|Open Source|
|Edition:|Official|
|Input Labels:|[document]|
|Output Labels:|[completions]|
|Language:|en|
|Size:|1.9 GB|

## References

https://huggingface.co/lmstudio-community/gemma-2-2b-it-GGUF
101 changes: 101 additions & 0 deletions docs/_posts/ahmedlone127/2024-10-29-gemma_2_2b_it_q6_k_en.md
@@ -0,0 +1,101 @@
---
layout: model
title: English gemma_2_2b_it_q6_k AutoGGUFModel from lmstudio-community
author: John Snow Labs
name: gemma_2_2b_it_q6_k
date: 2024-10-29
tags: [en, open_source, onnx, conversational, text_generation, text_to_text, llamacpp]
task: Text Generation
language: en
edition: Spark NLP 5.5.1
spark_version: 3.0
supported: true
engine: llamacpp
annotator: AutoGGUFModel
article_header:
type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

Pretrained AutoGGUFModel model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `gemma_2_2b_it_q6_k` is an English model prepared by lmstudio-community.

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/gemma_2_2b_it_q6_k_en_5.5.1_3.0_1730229619613.zip){:.button.button-orange.button-orange-trans.arr.button-icon}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/gemma_2_2b_it_q6_k_en_5.5.1_3.0_1730229619613.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import AutoGGUFModel
from pyspark.ml import Pipeline

# Start a Spark session with Spark NLP on the classpath
spark = sparknlp.start()

document = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

autoGGUFModel = AutoGGUFModel.pretrained("gemma_2_2b_it_q6_k","en") \
.setInputCols(["document"]) \
.setOutputCol("completions") \
.setBatchSize(4) \
.setNPredict(20) \
.setNGpuLayers(99) \
.setTemperature(0.4) \
.setTopK(40) \
.setTopP(0.9) \
.setPenalizeNl(True)

pipeline = Pipeline().setStages([document, autoGGUFModel])
data = spark.createDataFrame([["Hello, I am a"]]).toDF("text")
result = pipeline.fit(data).transform(data)
result.select("completions").show(truncate = False)

```
```scala

import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._ // assumes an active SparkSession named `spark`

val document = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val autoGGUFModel = AutoGGUFModel.pretrained("gemma_2_2b_it_q6_k", "en")
.setInputCols("document")
.setOutputCol("completions")
.setBatchSize(4)
.setNPredict(20)
.setNGpuLayers(99)
.setTemperature(0.4f)
.setTopK(40)
.setTopP(0.9f)
.setPenalizeNl(true)

val pipeline = new Pipeline().setStages(Array(document, autoGGUFModel))

val data = Seq("Hello, I am a").toDF("text")
val result = pipeline.fit(data).transform(data)
result.select("completions").show(truncate = false)

```
</div>
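
As an alternative to `pretrained()`, a previously downloaded copy of this model can be loaded from disk. A minimal sketch, assuming the zip linked above has been unpacked to a hypothetical local directory:

```python
from sparknlp.annotator import AutoGGUFModel

# Hypothetical path to the unpacked model archive; adjust to your environment.
local_model_path = "/models/gemma_2_2b_it_q6_k_en_5.5.1_3.0"

# Spark NLP annotator models can be restored from disk with the static load() method.
autoGGUFModel = AutoGGUFModel.load(local_model_path) \
    .setInputCols(["document"]) \
    .setOutputCol("completions")
```

The remaining pipeline stages are the same as in the `pretrained()` example above.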

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|gemma_2_2b_it_q6_k|
|Compatibility:|Spark NLP 5.5.1+|
|License:|Open Source|
|Edition:|Official|
|Input Labels:|[document]|
|Output Labels:|[completions]|
|Language:|en|
|Size:|2.1 GB|

## References

https://huggingface.co/lmstudio-community/gemma-2-2b-it-GGUF
101 changes: 101 additions & 0 deletions docs/_posts/ahmedlone127/2024-10-29-gemma_2_2b_it_q8_0_en.md
@@ -0,0 +1,101 @@
---
layout: model
title: English gemma_2_2b_it_q8_0 AutoGGUFModel from lmstudio-community
author: John Snow Labs
name: gemma_2_2b_it_q8_0
date: 2024-10-29
tags: [en, open_source, onnx, conversational, text_generation, text_to_text, llamacpp]
task: Text Generation
language: en
edition: Spark NLP 5.5.1
spark_version: 3.0
supported: true
engine: llamacpp
annotator: AutoGGUFModel
article_header:
type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

Pretrained AutoGGUFModel model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `gemma_2_2b_it_q8_0` is an English model prepared by lmstudio-community.

{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/gemma_2_2b_it_q8_0_en_5.5.1_3.0_1730229741349.zip){:.button.button-orange.button-orange-trans.arr.button-icon}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/gemma_2_2b_it_q8_0_en_5.5.1_3.0_1730229741349.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python

import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import AutoGGUFModel
from pyspark.ml import Pipeline

# Start a Spark session with Spark NLP on the classpath
spark = sparknlp.start()

document = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

autoGGUFModel = AutoGGUFModel.pretrained("gemma_2_2b_it_q8_0","en") \
.setInputCols(["document"]) \
.setOutputCol("completions") \
.setBatchSize(4) \
.setNPredict(20) \
.setNGpuLayers(99) \
.setTemperature(0.4) \
.setTopK(40) \
.setTopP(0.9) \
.setPenalizeNl(True)

pipeline = Pipeline().setStages([document, autoGGUFModel])
data = spark.createDataFrame([["Hello, I am a"]]).toDF("text")
result = pipeline.fit(data).transform(data)
result.select("completions").show(truncate = False)

```
```scala

import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._ // assumes an active SparkSession named `spark`

val document = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val autoGGUFModel = AutoGGUFModel.pretrained("gemma_2_2b_it_q8_0", "en")
.setInputCols("document")
.setOutputCol("completions")
.setBatchSize(4)
.setNPredict(20)
.setNGpuLayers(99)
.setTemperature(0.4f)
.setTopK(40)
.setTopP(0.9f)
.setPenalizeNl(true)

val pipeline = new Pipeline().setStages(Array(document, autoGGUFModel))

val data = Seq("Hello, I am a").toDF("text")
val result = pipeline.fit(data).transform(data)
result.select("completions").show(truncate = false)

```
</div>
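
A locally downloaded copy of this model can also be used instead of calling `pretrained()`. A minimal sketch, assuming the zip linked above has been unpacked to a hypothetical local directory:

```python
from sparknlp.annotator import AutoGGUFModel

# Hypothetical path to the unpacked model archive; adjust to your environment.
local_model_path = "/models/gemma_2_2b_it_q8_0_en_5.5.1_3.0"

# Spark NLP annotator models can be restored from disk with the static load() method.
autoGGUFModel = AutoGGUFModel.load(local_model_path) \
    .setInputCols(["document"]) \
    .setOutputCol("completions")
```

From here the pipeline is built exactly as in the `pretrained()` example above.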

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|gemma_2_2b_it_q8_0|
|Compatibility:|Spark NLP 5.5.1+|
|License:|Open Source|
|Edition:|Official|
|Input Labels:|[document]|
|Output Labels:|[completions]|
|Language:|en|
|Size:|2.7 GB|

## References

https://huggingface.co/lmstudio-community/gemma-2-2b-it-GGUF