Fixing some 404 errors (#14012)
agsfer authored Sep 28, 2023
1 parent 256ceac commit 74b8f23
Showing 24 changed files with 85 additions and 1,553 deletions.
2 changes: 1 addition & 1 deletion docs/_includes/footer.html
@@ -26,7 +26,7 @@
{%- include snippets/get-locale-string.html key='COPYRIGHT_DATES' -%}
{%- assign _locale_copyright_dates = __return -%}
© <span id="year"></span> John Snow Labs Inc.
<a href="http://www.johnsnowlabs.com/terms-of-service">Terms of Service</a> | <a href="http://www.johnsnowlabs.com/privacy-policy/">Privacy Policy</a>
<a href="https://www.johnsnowlabs.com/terms-of-service">Terms of Service</a> | <a href="https://www.johnsnowlabs.com/privacy-policy/">Privacy Policy</a>
</div>
</div>
</div>
4 changes: 2 additions & 2 deletions docs/_layouts/landing.html
@@ -14,8 +14,8 @@
{%- include snippets/get-nav-url.html path=_section.background_image.src -%}
{%- assign _url = __return -%}
{%- if _section.theme == 'light' -%}
<section class="hero section-row hero--center hero--light" id="hero-{{ forloop.index }}" {%- elsif _section.theme == 'dark' -%}
<section class="hero section-row hero--center hero--dark" id="hero-{{ forloop.index }}" {%- else -%} <section
<section class="hero {{ _section.topclass }} section-row hero--center hero--light" id="hero-{{ forloop.index }}" {%- elsif _section.theme == 'dark' -%}
<section class="hero {{ _section.topclass }} section-row hero--center hero--dark" id="hero-{{ forloop.index }}" {%- else -%} <section
class="hero section-row hero--center" id="hero-{{ forloop.index }}" {%- endif -%} {%- if _section.background_color -%}
style="background-image: url({{ _url }}); background-color: {{ _section.background_color }};">
{%- else -%}
33 changes: 31 additions & 2 deletions docs/_sass/custom.scss
@@ -4,6 +4,10 @@ $color-darkblue: #536B76;
$color-lightblue: #0098DA;
$color-orange: #FF8A00;

body, html, .root, .layout--page {
height: auto;
}

body {
font-size: 16px;
line-height: 24px;
@@ -22,7 +26,9 @@ body {

html {
scroll-behavior: smooth;
}
}



.article__content {
padding: 0 15px;
@@ -358,7 +364,27 @@ h2.h2_doc {
margin: 0 auto 10px;
}

.layout--landing {overflow: hidden;}

.hero {
&.bottom_section {
position: relative;
&::after,
&::before {
content: '';
position: absolute;
width: calc((100vw - 1900px)/2);
top: 0;
left: calc((-100vw + 1900px)/2);
display: block;
height: 100%;
background: #0098da;
}
&::after {
left: auto;
right: calc((-100vw + 1900px)/2);
}
}
.h3_title {
font-weight: 800;
font-size: 35px;
@@ -1740,7 +1766,7 @@ a.btn1.small {
.page__footer {
footer {
background: #fff;
padding: 38px 0;
padding: 38px 15px;
position: relative;
z-index: 999;
border-top: 1px solid #E2F5FE;
@@ -2867,6 +2893,9 @@ code {
.end_banner {
width: 100%;
}
.has-aside .col-aside > aside {
padding-bottom: 50px;
}
.layout--page--sidebar.layout--page--aside .main {
max-width: 970px;
.article-inner::before {
1,497 changes: 0 additions & 1,497 deletions docs/demo.md

This file was deleted.

2 changes: 1 addition & 1 deletion docs/en/CPUvsGPUbenchmark.md
@@ -241,6 +241,6 @@ Right now, we don't support multigpu training (1 model in different GPUs in para
</div><div class="h3-box" markdown="1">

### Where to look for more information about Training
Please take a look at the [Spark NLP](https://sparknlp.org/docs/en/training) and [Spark NLP for Healthcare](https://sparknlp.org/docs/en/licensed_training) Training sections, and feel free to reach out to us in case you want to maximize the performance on your GPU.
Please take a look at the [Spark NLP](https://sparknlp.org/docs/en/training) Training section, and feel free to reach out to us in case you want to maximize the performance on your GPU.

</div>
4 changes: 2 additions & 2 deletions docs/en/annotator_entries/Chunk2Doc.md
@@ -68,11 +68,11 @@ result.selectExpr("explode(chunkConverted)").show(false)
{%- endcapture -%}

{%- capture api_link -%}
[Chunk2Doc](/api/com/johnsnowlabs/nlp/Chunk2Doc)
[Chunk2Doc](/api/com/johnsnowlabs/nlp/annotators/Chunk2Doc)
{%- endcapture -%}

{%- capture python_api_link -%}
[Chunk2Doc](/api/python/reference/autosummary/sparknlp/base/chunk2_doc/index.html#sparknlp.base.chunk2_doc.Chunk2Doc)
[Chunk2Doc](/api/python/reference/autosummary/sparknlp/annotator/chunk2_doc/index.html#sparknlp.base.chunk2_doc.Chunk2Doc)
{%- endcapture -%}

{%- capture source_link -%}
4 changes: 2 additions & 2 deletions docs/en/annotator_entries/SentimentDetector.md
@@ -15,7 +15,7 @@ The dictionary can be set as a delimited text file.
By default, the sentiment score will be assigned labels `"positive"` if the score is `>= 0`, else `"negative"`.
To retrieve the raw sentiment scores, `enableScore` needs to be set to `true`.

For extended examples of usage, see the [Examples](https://github.com/JohnSnowLabs/spark-nlp/blob/master/examples/python/training/english/dictionary-sentiment/sentiment.ipynb)
For extended examples of usage, see the [Examples](https://github.com/JohnSnowLabs/spark-nlp/blob/master/examples/python/training/english/sentiment-detection/RuleBasedSentiment.ipynb)
and the [SentimentTestSpec](https://github.com/JohnSnowLabs/spark-nlp/blob/master/src/test/scala/com/johnsnowlabs/nlp/annotators/sda/pragmatic/PragmaticSentimentTestSpec.scala).
{%- endcapture -%}

@@ -49,7 +49,7 @@ The dictionary can be set as a delimited text file.
By default, the sentiment score will be assigned labels `"positive"` if the score is `>= 0`, else `"negative"`.
To retrieve the raw sentiment scores, `enableScore` needs to be set to `true`.

For extended examples of usage, see the [Examples](https://github.com/JohnSnowLabs/spark-nlp/blob/master/examples/python/training/english/dictionary-sentiment/sentiment.ipynb)
For extended examples of usage, see the [Examples](https://github.com/JohnSnowLabs/spark-nlp/blob/master/examples/python/training/english/sentiment-detection/RuleBasedSentiment.ipynb)
and the [SentimentTestSpec](https://github.com/JohnSnowLabs/spark-nlp/blob/master/src/test/scala/com/johnsnowlabs/nlp/annotators/sda/pragmatic/PragmaticSentimentTestSpec.scala).
{%- endcapture -%}

2 changes: 1 addition & 1 deletion docs/en/annotator_entries/Token2Chunk.md
@@ -101,7 +101,7 @@ result.selectExpr("explode(chunk) as result").show(false)
{%- endcapture -%}

{%- capture python_api_link -%}
[Token2Chunk](/api/python/reference/autosummary/sparknlp/annotator/token/token2_chunk/index.html#sparknlp.annotator.token.token2_chunk.Token2Chunk)
[Token2Chunk](/api/python/reference/autosummary/sparknlp/base/token2_chunk/index.html#sparknlp.annotator.token.token2_chunk.Token2Chunk)
{%- endcapture -%}

{%- capture source_link -%}
4 changes: 2 additions & 2 deletions docs/en/annotator_entries/ViveknSentiment.md
@@ -14,7 +14,7 @@ For training your own model, please see the documentation of that class.
The analyzer requires sentence boundaries to give a score in context.
Tokenization is needed to make sure tokens are within bounds. Transitivity requirements also apply.

For extended examples of usage, see the [Examples](https://github.com/JohnSnowLabs/spark-nlp/blob/master/examples/python/training/english/vivekn-sentiment/VivekNarayanSentimentApproach.ipynb)
For extended examples of usage, see the [Examples](https://github.com/JohnSnowLabs/spark-nlp/blob/master/examples/python/training/english/sentiment-detection/VivekNarayanSentimentApproach.ipynb)
and the [ViveknSentimentTestSpec](https://github.com/JohnSnowLabs/spark-nlp/tree/master/src/test/scala/com/johnsnowlabs/nlp/annotators/sda/vivekn).
{%- endcapture -%}

@@ -49,7 +49,7 @@ Tokenization is needed to make sure tokens are within bounds. Transitivity requi

The training data needs to consist of a column for normalized text and a label column (either `"positive"` or `"negative"`).

For extended examples of usage, see the [Examples](https://github.com/JohnSnowLabs/spark-nlp/blob/master/examples/python/training/english/vivekn-sentiment/VivekNarayanSentimentApproach.ipynb)
For extended examples of usage, see the [Examples](https://github.com/JohnSnowLabs/spark-nlp/blob/master/examples/python/training/english/sentiment-detection/VivekNarayanSentimentApproach.ipynb)
and the [ViveknSentimentTestSpec](https://github.com/JohnSnowLabs/spark-nlp/tree/master/src/test/scala/com/johnsnowlabs/nlp/annotators/sda/vivekn).
{%- endcapture -%}

2 changes: 1 addition & 1 deletion docs/en/annotator_entries/WordSegmenter.md
@@ -42,7 +42,7 @@ The default model is `"wordseg_pku"`, default language is `"zh"`, if no values a
For available pretrained models please see the
[Models Hub](https://sparknlp.org/models?task=Word+Segmentation).

For extended examples of usage, see the [Examples](https://github.com/JohnSnowLabs/spark-nlp/blob/master/jupyter/annotation/chinese/word_segmentation/words_segmenter_demo.ipynb)
For extended examples of usage, see the [Examples](https://github.com/JohnSnowLabs/spark-nlp/blob/master/examples/python/annotation/text/chinese/word_segmentation/words_segmenter_demo.ipynb)
and the [WordSegmenterTest](https://github.com/JohnSnowLabs/spark-nlp/blob/master/src/test/scala/com/johnsnowlabs/nlp/annotators/WordSegmenterTest.scala).

**References:**
52 changes: 26 additions & 26 deletions docs/en/developers.md
@@ -31,19 +31,19 @@ Before you begin, make sure that you have Java and Spark installed in your system
```shell
java -version
```

![Java version](\assets\images\java_version.png)
![Java version](/assets/images/java_version.png)

and

```shell
spark-submit --version
```

![Spark Submit](\assets\images\spark_submit.png)
![Spark Submit](/assets/images/spark_submit.png)

The next step is to open IntelliJ IDEA. On the **Welcome to IntelliJ IDEA** screen you will see the option to **Check out from Version Control**.

![Idea initial screen](\assets\images\idea_init.png)
![Idea initial screen](/assets/images/idea_init.png)

Log in to your GitHub account in the pop-up. Then select the Spark NLP repo URL from the list:

@@ -53,41 +53,41 @@ https://github.com/JohnSnowLabs/spark-nlp

and press the *Clone* button. If you don't see the URL in the list, clone or fork the repo to your GitHub account first and try again.
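
If you prefer the command line, you can equivalently clone the repository first and open the folder in IntelliJ IDEA afterwards — a minimal sketch:

```shell
# Clone the repository (or your fork) and open it from the IDE later
git clone https://github.com/JohnSnowLabs/spark-nlp.git
cd spark-nlp
```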

![Idea choose repo](\assets\images\idea_choose_repo.png)
![Idea choose repo](/assets/images/idea_choose_repo.png)

Once the repo is cloned, the IDE will detect the SBT file with dependencies. Click **Yes** to start the import from sbt.

![Pop up build](\assets\images\pop_up_build.png)
![Pop up build](/assets/images/pop_up_build.png)

In the **Import from sbt** pop-up, make sure you have JDK 8 detected. Click **Ok** to proceed and download the required resources.

![Pop up settings build](\assets\images\settings_build.png)
![Pop up settings build](/assets/images/settings_build.png)

If you already had dependencies installed, you may see the **Not empty folder** pop-up; click **Ok** to ignore it and reload the resources.

IntelliJ IDEA will open and start syncing the SBT project. It may take some time; you will see the progress in the build output panel at the bottom of the screen. To see the project panel on the left, press **Alt+1**.

![Idea first screen](\assets\images\idea_first_screen.png)
![Idea first screen](/assets/images/idea_first_screen.png)

The next step is to install the Python plugin for IntelliJ IDEA. To do this, open `File -> Settings -> Plugins`, type `Python` in the search box, and select the Python plugin by JetBrains. Install this plugin by clicking the `Install` button.

![Python plugin](\assets\images\python_plugin.png)
![Python plugin](/assets/images/python_plugin.png)

After these steps you can check the project structure in `File -> Project Structure -> Modules`.

![Project structure](\assets\images\project_structure.png)
![Project structure](/assets/images/project_structure.png)

Make sure that you have the `spark-nlp` and `spark-nlp-build` folders and no errors in the exported dependencies.

In the `Project` settings, check that the project SDK is set to 1.8, and that in `Platform Settings -> SDKs` you have both a Java and a Python installation.

![Project settings](\assets\images\project_settings.png)
![Project settings](/assets/images/project_settings.png)

![Platform settings](\assets\images\platform_settings.png)
![Platform settings](/assets/images/platform_settings.png)

If you don't see Python installed in the `SDKs` tab, click the **+** button and add a **Python SDK** with a new virtual environment in the project folder, using Python 3.x.

![Add python](\assets\images\add_python.png)
![Add python](/assets/images/add_python.png)

</div><div class="h3-box" markdown="1">

@@ -97,15 +97,15 @@ If you don't see Python installed in the `SDKs` tab click **+** button, add **P

Click **Add configuration** in the top right corner. In the pop-up, click **+** and look for **sbt task**.

![Add config](\assets\images\add_config.png)
![Add config](/assets/images/add_config.png)

In the **Name** field, put `Test`. In the **Tasks** field, write `test`. You can then disable the **Use sbt shell** checkbox to allow more custom configuration. In the **VM parameters**, increase the memory by changing `-Xmx1024M` to `-Xmx10G`, and click **Ok**.

![sbt task](\assets\images\sbt_task.png)
![sbt task](/assets/images/sbt_task.png)

If everything was set up correctly, you should see an enabled green **Run 'Test'** button in the top right. Click it to start running the tests.

![sbt task](\assets\images\sbt_task_run.png)
![sbt task](/assets/images/sbt_task_run.png)

This will run all tests under ``spark-nlp/src/test/scala/com.johnsnowlabs/``.
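
Assuming `sbt` is also installed on your PATH, the same full run can be reproduced from a terminal; the `-mem` flag (in MB) mirrors the `-Xmx10G` VM parameter configured above:

```shell
# Run the complete test suite with a ~10 GB heap, as in the IDE task
sbt -mem 10240 test
```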

@@ -123,7 +123,7 @@ Open the test file you want to run. For example, ``spark-nlp/src/test/scala/com.john

In the **Tasks** field, write `"testOnly *classpath*"` -> `"testOnly com.johnsnowlabs.nlp.FinisherTestSpec"` and click **Ok** to save the individual Scala test run configuration.

![individual sbt task](\assets\images\individual_test.png)
![individual sbt task](/assets/images/individual_test.png)

Press the **play** button to run the individual test.
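
Assuming `sbt` is available in a terminal, the same single test spec can also be run outside the IDE:

```shell
# Run only the FinisherTestSpec, matching the testOnly task configured above
sbt "testOnly com.johnsnowlabs.nlp.FinisherTestSpec"
```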

@@ -139,32 +139,32 @@ To run tests in debug mode click the **Debug** button (next to the **play** button). In

To run the Python tests, you first need to configure the project structure. Go to `File -> Project Settings -> Modules`, click the **+** button, and select **New Module**.

![python module add](\assets\images\python_module_add.png)
![python module add](/assets/images/python_module_add.png)

In the pop-up, choose Python in the left menu, select the Python SDK from the created virtual environment, and click **Next**.

![python module pop up](\assets\images\python_module_pop_up.png)
![python module pop up](/assets/images/python_module_pop_up.png)

Enter `python` as the Module name and click **Finish**.

After that, you need to add the Spark dependencies. Select the created Python module and click the **+** button in the Dependencies section.

![python libs add](\assets\images\python_libs_add.png)
![python libs add](/assets/images/python_libs_add.png)

Choose **Jars or directories...** and find the installation path of Spark (usually the folder name is ``spark-2.4.5-bin-hadoop2.7``). In the Spark folder, go to ``python/lib`` and add ``pyspark.zip`` to the project. Do the same for the other file in the same folder, ``py4j-0.10.7-src.zip``.

![python libs select](\assets\images\python_libs_select.png)
![python libs attached](\assets\images\python_libs_attached.png)
![python libs select](/assets/images/python_libs_select.png)
![python libs attached](/assets/images/python_libs_attached.png)
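
As a quick sanity check that both archives are where the dialog expects them — a sketch assuming a default Spark 2.4.5 download; adjust the path to your installation:

```shell
# Both archives live in the lib folder of Spark's python directory
ls spark-2.4.5-bin-hadoop2.7/python/lib
# pyspark.zip  py4j-0.10.7-src.zip
```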

All available tests are in ``spark-nlp/python/run-tests.py``. Click **Add configuration** or **Edit configuration** in the top right corner. In the pop-up, click **+** and look for **Python**.

![python test add](\assets\images\python_test_add.png)
![python test add](/assets/images/python_test_add.png)

In **Script path**, locate the file ``spark-nlp/python/run-tests.py``. You also need to add the **SPARK_HOME** environment variable to the project: choose **Environment variables**, add a new variable **SPARK_HOME**, and insert the installation path of Spark in the Value field.

![python spark home](\assets\images\python_spark_home.png)
![python spark home](/assets/images/python_spark_home.png)

![python test config](\assets\images\python_test_config.png)
![python test config](/assets/images/python_test_config.png)

Click **Ok** to save and close the pop-up, then click **Ok** to confirm the new task creation.
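
For reference, the same run can be approximated from a terminal, assuming the virtual environment is active and `SPARK_HOME` points at your Spark installation (the path below is a placeholder):

```shell
# Placeholder path — set SPARK_HOME to your actual Spark installation
export SPARK_HOME=/path/to/spark-2.4.5-bin-hadoop2.7
cd python
python run-tests.py
```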

@@ -174,7 +174,7 @@ Before running the tests we need to install the required Python dependencies in the
```shell
source venv/bin/activate
```

![activate env](\assets\images\activate_env.png)
![activate env](/assets/images/activate_env.png)

then install the packages by running

@@ -191,7 +191,7 @@ Click **Add configuration** or **Edit configuration** in the top right corner. I

In the **Name** field, put `AssemblyCopy`. In the **Tasks** field, write `assemblyAndCopy`. You can then disable the **Use sbt shell** checkbox to allow more custom configuration. In the **VM parameters**, increase the memory by changing `-Xmx1024M` to `-Xmx6G`, and click **Ok**.

![compile jar](\assets\images\compile_jar.png)
![compile jar](/assets/images/compile_jar.png)

You can find the created jar at ``spark-nlp/python/lib/sparknlp.jar``.
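
Assuming `sbt` is installed, the same build can be run from a terminal; `-mem 6144` mirrors the `-Xmx6G` VM parameter above:

```shell
# Build the assembly jar and copy it to python/lib, as the IDE task does
sbt -mem 6144 assemblyAndCopy
```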

2 changes: 1 addition & 1 deletion docs/en/display.md
@@ -144,7 +144,7 @@ The following image gives an example of html output that is obtained for a coupl

### Visualize entity resolution

**Entity resolution** refers to the normalization of named entities predicted by Spark NLP with respect to standard terminologies such as ICD-10, SNOMED, RxNorm etc. You can read more about the available entity resolvers <a href="/en/licensed_annotators#chunkentityresolver">here.</a>
**Entity resolution** refers to the normalization of named entities predicted by Spark NLP with respect to standard terminologies such as ICD-10, SNOMED, RxNorm etc. You can read more about the available entity resolvers <a href="/docs/en/annotators">here.</a>

The **EntityResolverVisualizer** will automatically display, on top of the NER label, the standard code (ICD10 CM, PCS, ICDO; CPT) that corresponds to that entity, as well as the short description of the code. If no resolution code could be identified, a regular NER-type visualization will be displayed.

4 changes: 2 additions & 2 deletions docs/en/mlflow.md
@@ -70,8 +70,8 @@ Finally, make sure you follow the Spark NLP installation, available [here](https
We are going to use Docker to instantiate a MySQL container with a persistent volume, but you can install it directly on your machine without Docker.

To do that, we will need to have the following installed (feel free to skip this step if you will install MySQL without Docker):
* [Docker](!https://docs.docker.com/engine/install/)
* [Docker-compose](!https://docs.docker.com/compose/install/)
* [Docker](https://docs.docker.com/engine/install/)
* [Docker-compose](https://docs.docker.com/compose/install/)

In our case, I used this `docker-compose.yml` file to instantiate a `mysql` database with a persistent volume:
```
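# NOTE: the original file contents are truncated in this diff view.
# The following is a minimal illustrative sketch of a MySQL service with
# a persistent volume; the image tag, credentials, and volume name are
# placeholders rather than the author's actual configuration.
version: '3.1'
services:
  db:
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
    ports:
      - "3306:3306"
    volumes:
      - mysql_data:/var/lib/mysql    # persistent named volume
volumes:
  mysql_data:
```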
@@ -17,7 +17,7 @@ sidebar:

### Highlights

- This version of NLP Server offers support for licensed models and annotators. Users can now upload a Spark NLP for Healthcare license file and get access to a wide range of additional [annotators and transformers](https://sparknlp.org/docs/en/licensed_annotators). A valid license key also gives access to more than [400 state-of-the-art healthcare models](https://sparknlp.org/models?edition=Spark+NLP+for+Healthcare). Those can be used via easy to learn NLU spells or via API calls.
- This version of NLP Server offers support for licensed models and annotators. Users can now upload a Spark NLP for Healthcare license file and get access to a wide range of additional [annotators and transformers](https://sparknlp.org/docs/en/annotators). A valid license key also gives access to more than [400 state-of-the-art healthcare models](https://sparknlp.org/models?edition=Spark+NLP+for+Healthcare). Those can be used via easy to learn NLU spells or via API calls.
- NLP Server now offers better handling of large amounts of data for quick analysis via the UI by supporting CSV file uploads.
- Support for floating licenses. Users can now take advantage of the floating license flexibility and use them inside the NLP Server.

3 changes: 1 addition & 2 deletions docs/en/pipelines.md
@@ -15,8 +15,7 @@ sidebar:

Pretrained Pipelines have moved to Models Hub.
Please follow this link for the updated list of all models and pipelines:
[Models Hub](https://sparknlp.org/modelss)
{:.success}
[Models Hub](https://sparknlp.org/models)
{:.success}

</div><div class="h3-box" markdown="1">