[SPARKNLP-1027] llama.cpp integration #14364
Merged: maziyarpanahi merged 18 commits into JohnSnowLabs:release/550-release-candidate from DevinTDHa:feature/SPARKNLP-1027-llama-cpp-integration on Sep 5, 2024.
Commits (18):
- cb45d79 [SPARKNLP-1027] Initial Tests passing (DevinTDHa)
- 009a91d [SPARKNLP-1027] Implement Parameters (DevinTDHa)
- 479ef8e [SPARKNLP-1027] Add metadata to AutoGGUFModel (DevinTDHa)
- 4c0d46a [SPARKNLP-1027] Scala Side (DevinTDHa)
- 30c1f4a [SPARKNLP-1027] Initial Python Tests running and parameters fixed (DevinTDHa)
- 320a7fa [SPARKNLP-1027] AutoGGUFModel can auto-detect GPU (DevinTDHa)
- 499a081 [SPARKNLP-1027] Complete Documentation (DevinTDHa)
- 82f09fb [SPARKNLP-1027] Add missing parameters (DevinTDHa)
- 00a7904 [SPARKNLP-1027] Add Support for StructFeature setters on python side (DevinTDHa)
- bbcde4d [SPARKNLP-1027] Add llama.cpp dependencies (DevinTDHa)
- 5850132 [SPARKNLP-1027] getMetadata for Python side (DevinTDHa)
- 94bd5b9 Bump jsl-llamacpp to 0.1.0-rc3 (DevinTDHa)
- 3241a68 [SPARKNLP-1027] Exception Handling and Finalize tests (DevinTDHa)
- a96284f [SPARKNLP-1027] Update jsl-llamacpp version (DevinTDHa)
- 5c3720f [SPARKNLP-1027] Update Documentation (DevinTDHa)
- a1e9344 Merge branch 'release/550-release-candidate' into feature/SPARKNLP-10… (maziyarpanahi)
- c474d66 Merge branch 'release/550-release-candidate' into feature/SPARKNLP-10… (maziyarpanahi)
- ff20a72 [SPARKNLP-1027] Remove old Parameters (DevinTDHa)
@@ -0,0 +1,135 @@
{%- capture title -%}
AutoGGUFModel
{%- endcapture -%}

{%- capture description -%}
Annotator that uses the llama.cpp library to generate text completions with large language
models.

For settable parameters and their explanations, see [HasLlamaCppProperties](https://github.com/JohnSnowLabs/spark-nlp/tree/master/src/main/scala/com/johnsnowlabs/nlp/HasLlamaCppProperties.scala) and refer to
the llama.cpp documentation of
[server.cpp](https://github.com/ggerganov/llama.cpp/tree/7d5e8777ae1d21af99d4f95be10db4870720da91/examples/server)
for more information.

If the parameters are not set, the annotator will default to the parameters provided by
the model.

Pretrained models can be loaded with `pretrained` of the companion object:

```scala
val autoGGUFModel = AutoGGUFModel.pretrained()
  .setInputCols("document")
  .setOutputCol("completions")
```

The default model is `"gguf-phi3-mini-4k-instruct-q4"`, if no name is provided.
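The defaults that a loaded model provides can be inspected through its GGUF metadata. A minimal sketch, assuming the `getMetadata` accessor added in this PR (its exact return format is not shown in this diff):

```scala
// Sketch only: load the default pretrained model and print the metadata it
// ships with. `getMetadata` is introduced by this PR (see the commit list);
// the structure of the returned value is treated as opaque here.
val model = AutoGGUFModel.pretrained("gguf-phi3-mini-4k-instruct-q4")
  .setInputCols("document")
  .setOutputCol("completions")

println(model.getMetadata)
```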
For available pretrained models please see the [Models Hub](https://sparknlp.org/models).

For extended examples of usage, see the
[AutoGGUFModelTest](https://github.com/JohnSnowLabs/spark-nlp/tree/master/src/test/scala/com/johnsnowlabs/nlp/annotators/seq2seq/AutoGGUFModelTest.scala)
and the
[example notebook](https://github.com/JohnSnowLabs/spark-nlp/tree/master/examples/python/llama.cpp/llama.cpp_in_Spark_NLP_AutoGGUFModel.ipynb).

**Note**: To use GPU inference with this annotator, make sure to use the Spark NLP GPU package and set
the number of GPU layers with the `setNGpuLayers` method.

When using larger models, we recommend adjusting GPU usage with `setNCtx` and `setNGpuLayers`
according to your hardware to avoid out-of-memory errors.
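A minimal sketch of such a configuration (the concrete values below are illustrative assumptions, not recommendations from this PR, and should be tuned to the available VRAM):

```scala
// Sketch only: a GPU-oriented setup using the parameters named above. The
// context size and the number of offloaded layers are example values.
val gpuConfiguredModel = AutoGGUFModel.pretrained()
  .setInputCols("document")
  .setOutputCol("completions")
  .setNCtx(4096)     // context window size; larger contexts need more memory
  .setNGpuLayers(99) // offload (up to) all layers to the GPU
```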
{%- endcapture -%}

{%- capture input_anno -%}
DOCUMENT
{%- endcapture -%}

{%- capture output_anno -%}
DOCUMENT
{%- endcapture -%}

{%- capture python_example -%}
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> document = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> autoGGUFModel = AutoGGUFModel.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("completions") \
...     .setBatchSize(4) \
...     .setNPredict(20) \
...     .setNGpuLayers(99) \
...     .setTemperature(0.4) \
...     .setTopK(40) \
...     .setTopP(0.9) \
...     .setPenalizeNl(True)
>>> pipeline = Pipeline().setStages([document, autoGGUFModel])
>>> data = spark.createDataFrame([["Hello, I am a"]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.select("completions").show(truncate=False)
+-----------------------------------------------------------------------------------------------------------------------------------+
|completions                                                                                                                          |
+-----------------------------------------------------------------------------------------------------------------------------------+
|[{document, 0, 78, new user. I am currently working on a project and I need to create a list of , {prompt -> Hello, I am a}, []}]|
+-----------------------------------------------------------------------------------------------------------------------------------+
{%- endcapture -%}
{%- capture scala_example -%}
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val document = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val autoGGUFModel = AutoGGUFModel
  .pretrained()
  .setInputCols("document")
  .setOutputCol("completions")
  .setBatchSize(4)
  .setNPredict(20)
  .setNGpuLayers(99)
  .setTemperature(0.4f)
  .setTopK(40)
  .setTopP(0.9f)
  .setPenalizeNl(true)

val pipeline = new Pipeline().setStages(Array(document, autoGGUFModel))

val data = Seq("Hello, I am a").toDF("text")
val result = pipeline.fit(data).transform(data)
result.select("completions").show(truncate = false)
+-----------------------------------------------------------------------------------------------------------------------------------+
|completions                                                                                                                          |
+-----------------------------------------------------------------------------------------------------------------------------------+
|[{document, 0, 78, new user. I am currently working on a project and I need to create a list of , {prompt -> Hello, I am a}, []}]|
+-----------------------------------------------------------------------------------------------------------------------------------+
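// Follow-up sketch (not part of the original example): the generated text can be
// pulled out of the annotation structs via the standard Spark NLP `result` field.
result.selectExpr("explode(completions.result) as completion").show(truncate = false)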
{%- endcapture -%}

{%- capture api_link -%}
[AutoGGUFModel](/api/com/johnsnowlabs/nlp/annotators/seq2seq/AutoGGUFModel)
{%- endcapture -%}

{%- capture python_api_link -%}
[AutoGGUFModel](/api/python/reference/autosummary/sparknlp/annotator/seq2seq/auto_gguf_model/index.html)
{%- endcapture -%}

{%- capture source_link -%}
[AutoGGUFModel](https://github.com/JohnSnowLabs/spark-nlp/tree/master/src/main/scala/com/johnsnowlabs/nlp/annotators/seq2seq/AutoGGUFModel.scala)
{%- endcapture -%}

{% include templates/anno_template.md
title=title
description=description
input_anno=input_anno
output_anno=output_anno
python_example=python_example
scala_example=scala_example
api_link=api_link
python_api_link=python_api_link
source_link=source_link
%}
@DevinTDHa We don't need a special build for `aarch64`, or is it not supported?