
Sync finance with master #13467

Merged on Feb 4, 2023

24 commits:
ca470cf
Updates doc with FINLEG 1.6.0
Jan 23, 2023
a164fb3
Fixed some md files (#13400)
Damla-Gurbaz Jan 24, 2023
0a770d9
Uptade ocr cards (#13407)
aymanechilah Jan 24, 2023
3ed9aea
428 release candidate (#13406)
maziyarpanahi Jan 24, 2023
9922f40
update scala (#13412)
aymanechilah Jan 24, 2023
973613b
remove {{ (#13413)
aymanechilah Jan 24, 2023
259a8c3
Upd tab2 (#13416)
aymanechilah Jan 25, 2023
bdd628b
add tabs fix (#13417)
agsfer Jan 26, 2023
99347d9
4.2.8 Release Note Update (#13422)
Cabir40 Jan 26, 2023
5101f76
Update quickstart.md
maziyarpanahi Jan 28, 2023
0e784ee
[skip ci] Commit to branch 4.2.8-healthcare-docs-f7c11b955d888f94fe5e…
jsl-builder Jan 28, 2023
9a098c5
Delete 2022-07-07-bert_qa_legal_en_3_0.md
josejuanmartinez Jan 30, 2023
d142547
Delete 2022-06-20-roberta_qa_legal_qa_en_3_0.md
josejuanmartinez Jan 30, 2023
e720149
Delete 2022-06-06-bert_qa_bert_large_question_answering_finetuned_leg…
josejuanmartinez Jan 30, 2023
d1f6411
Delete 2022-12-02-roberta_qa_base_cuad_finetuned_en.md
josejuanmartinez Jan 30, 2023
c20521f
Delete 2022-12-02-roberta_qa_marshmellow77_base_cuad_en.md
josejuanmartinez Jan 30, 2023
bad1bf3
Delete 2022-06-20-roberta_qa_marshmellow77_roberta_base_cuad_en_3_0.md
josejuanmartinez Jan 30, 2023
d5800f0
Delete 2022-06-20-roberta_qa_roberta_base_on_cuad_en_3_0.md
josejuanmartinez Jan 30, 2023
cc4466e
Delete 2022-12-02-roberta_qa_base_on_cuad_en.md
josejuanmartinez Jan 30, 2023
75fa450
update image path (#13439)
agsfer Jan 30, 2023
2265438
Update Input output (#13451)
agsfer Feb 1, 2023
e882ee3
Fix the typo in docs
maziyarpanahi Feb 1, 2023
1fe557b
Update DeIdentification.md (#13449)
Meryem1425 Feb 1, 2023
6fbc5ca
rename some demos (#13458)
agsfer Feb 2, 2023
10 changes: 10 additions & 0 deletions CHANGELOG
@@ -1,3 +1,13 @@
========
4.2.8
========
----------------
Bug Fixes & Enhancements
----------------
* Fix the issue with optional keys (labels) in metadata when using XXXForSequenceClassification annotators. This normalizes `Some(neg) -> 0.13602075` to `neg -> 0.13602075`, in harmony with all the other classifiers. https://github.com/JohnSnowLabs/spark-nlp/pull/13396
* Introducing a config to skip `LightPipeline` validation for `inputCols` on the Python side for projects depending on Spark NLP. This toggle should only be used for specific annotators that do not follow the convention of predefined `inputAnnotatorTypes` and `outputAnnotatorType`.


========
4.2.7
========
88 changes: 44 additions & 44 deletions README.md
@@ -152,7 +152,7 @@ To use Spark NLP you need the following requirements:

**GPU (optional):**

Spark NLP 4.2.7 is built with TensorFlow 2.7.1, and the following NVIDIA® software is required only for GPU support:
Spark NLP 4.2.8 is built with TensorFlow 2.7.1, and the following NVIDIA® software is required only for GPU support:

- NVIDIA® GPU drivers version 450.80.02 or higher
- CUDA® Toolkit 11.2
@@ -168,7 +168,7 @@ $ java -version
$ conda create -n sparknlp python=3.7 -y
$ conda activate sparknlp
# spark-nlp by default is based on pyspark 3.x
$ pip install spark-nlp==4.2.7 pyspark==3.2.3
$ pip install spark-nlp==4.2.8 pyspark==3.2.3
```

In Python console or Jupyter `Python3` kernel:
@@ -213,7 +213,7 @@ For more examples, you can visit our dedicated [repository](https://github.com/J

## Apache Spark Support

Spark NLP *4.2.7* has been built on top of Apache Spark 3.2 and fully supports Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x:
Spark NLP *4.2.8* has been built on top of Apache Spark 3.2 and fully supports Apache Spark 3.0.x, 3.1.x, 3.2.x, and 3.3.x:

| Spark NLP | Apache Spark 2.3.x | Apache Spark 2.4.x | Apache Spark 3.0.x | Apache Spark 3.1.x | Apache Spark 3.2.x | Apache Spark 3.3.x |
|-----------|--------------------|--------------------|--------------------|--------------------|--------------------|--------------------|
@@ -247,7 +247,7 @@ Find out more about `Spark NLP` versions from our [release notes](https://github

## Databricks Support

Spark NLP 4.2.7 has been tested and is compatible with the following runtimes:
Spark NLP 4.2.8 has been tested and is compatible with the following runtimes:

**CPU:**

@@ -291,7 +291,7 @@ NOTE: Spark NLP 4.0.x is based on TensorFlow 2.7.x which is compatible with CUDA

## EMR Support

Spark NLP 4.2.7 has been tested and is compatible with the following EMR releases:
Spark NLP 4.2.8 has been tested and is compatible with the following EMR releases:

- emr-6.2.0
- emr-6.3.0
@@ -329,23 +329,23 @@ Spark NLP supports all major releases of Apache Spark 3.0.x, Apache Spark 3.1.x,
```sh
# CPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8

spark-submit --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7
spark-submit --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8
```
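All of the coordinates in this section follow the same `groupId:artifactId_scalaVersion:version` pattern. As an illustrative sketch only (this helper is ours, not part of Spark NLP), the pattern can be expressed as:

```python
# Illustrative helper (not a Spark NLP API): build the Maven coordinate
# strings passed to --packages, following the pattern shown above.
def spark_nlp_coordinate(artifact: str = "spark-nlp",
                         scala: str = "2.12",
                         version: str = "4.2.8") -> str:
    return f"com.johnsnowlabs.nlp:{artifact}_{scala}:{version}"

print(spark_nlp_coordinate())                           # com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8
print(spark_nlp_coordinate(artifact="spark-nlp-gpu"))   # com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.8
```

The GPU, AArch64, and M1 coordinates below differ only in the `artifact` part.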

`spark-nlp` has been published to the [Maven Repository](https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp).

```sh
# GPU

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.7
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.8

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.7
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.8

spark-submit --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.7
spark-submit --packages com.johnsnowlabs.nlp:spark-nlp-gpu_2.12:4.2.8

```

@@ -354,11 +354,11 @@ The `spark-nlp-gpu` has been published to the [Maven Repository](https://mvnrepo
```sh
# AArch64

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.7
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.8

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.7
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.8

spark-submit --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.7
spark-submit --packages com.johnsnowlabs.nlp:spark-nlp-aarch64_2.12:4.2.8

```

@@ -367,11 +367,11 @@ The `spark-nlp-aarch64` has been published to the [Maven Repository](https://mvn
```sh
# M1

spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.7
spark-shell --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.8

pyspark --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.7
pyspark --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.8

spark-submit --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.7
spark-submit --packages com.johnsnowlabs.nlp:spark-nlp-m1_2.12:4.2.8

```

@@ -383,7 +383,7 @@ The `spark-nlp-m1` has been published to the [Maven Repository](https://mvnrepos
spark-shell \
--driver-memory 16g \
--conf spark.kryoserializer.buffer.max=2000M \
--packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7
--packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8
```

## Scala
@@ -399,7 +399,7 @@ Spark NLP supports Scala 2.12.15 if you are using Apache Spark 3.0.x, 3.1.x, 3.2
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp_2.12</artifactId>
<version>4.2.7</version>
<version>4.2.8</version>
</dependency>
```

@@ -410,7 +410,7 @@ Spark NLP supports Scala 2.12.15 if you are using Apache Spark 3.0.x, 3.1.x, 3.2
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-gpu_2.12</artifactId>
<version>4.2.7</version>
<version>4.2.8</version>
</dependency>
```

@@ -421,7 +421,7 @@ Spark NLP supports Scala 2.12.15 if you are using Apache Spark 3.0.x, 3.1.x, 3.2
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-aarch64_2.12</artifactId>
<version>4.2.7</version>
<version>4.2.8</version>
</dependency>
```

@@ -432,7 +432,7 @@ Spark NLP supports Scala 2.12.15 if you are using Apache Spark 3.0.x, 3.1.x, 3.2
<dependency>
<groupId>com.johnsnowlabs.nlp</groupId>
<artifactId>spark-nlp-m1_2.12</artifactId>
<version>4.2.7</version>
<version>4.2.8</version>
</dependency>
```

@@ -442,28 +442,28 @@ Spark NLP supports Scala 2.12.15 if you are using Apache Spark 3.0.x, 3.1.x, 3.2

```sbtshell
// https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp
libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp" % "4.2.7"
libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp" % "4.2.8"
```

**spark-nlp-gpu:**

```sbtshell
// https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-gpu
libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-gpu" % "4.2.7"
libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-gpu" % "4.2.8"
```

**spark-nlp-aarch64:**

```sbtshell
// https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-aarch64
libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-aarch64" % "4.2.7"
libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-aarch64" % "4.2.8"
```

**spark-nlp-m1:**

```sbtshell
// https://mvnrepository.com/artifact/com.johnsnowlabs.nlp/spark-nlp-m1
libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-m1" % "4.2.7"
libraryDependencies += "com.johnsnowlabs.nlp" %% "spark-nlp-m1" % "4.2.8"
```

Maven Central: [https://mvnrepository.com/artifact/com.johnsnowlabs.nlp](https://mvnrepository.com/artifact/com.johnsnowlabs.nlp)
@@ -483,7 +483,7 @@ If you installed pyspark through pip/conda, you can install `spark-nlp` through
Pip:

```bash
pip install spark-nlp==4.2.7
pip install spark-nlp==4.2.8
```

Conda:
@@ -511,7 +511,7 @@ spark = SparkSession.builder \
.config("spark.driver.memory","16G")\
.config("spark.driver.maxResultSize", "0") \
.config("spark.kryoserializer.buffer.max", "2000M")\
.config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7")\
.config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8")\
.getOrCreate()
```
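The builder settings above are the same ones used for `spark-shell` and `pyspark` elsewhere in this README. As a sketch, they can be kept in a single dictionary and applied in a loop (the dictionary and helper are illustrative, ours only; the keys and values mirror the snippet above):

```python
# Sketch: keep the Spark NLP session settings in one place so the same values
# can be reused across environments. Not a Spark NLP API.
SPARK_NLP_CONF = {
    "spark.driver.memory": "16G",
    "spark.driver.maxResultSize": "0",
    "spark.kryoserializer.buffer.max": "2000M",
    "spark.jars.packages": "com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8",
}

def apply_conf(builder, conf=SPARK_NLP_CONF):
    """Apply each key/value pair via the builder's .config(key, value)."""
    for key, value in conf.items():
        builder = builder.config(key, value)
    return builder
```

With a real PySpark installation this would be used as `spark = apply_conf(SparkSession.builder.appName("Spark NLP")).getOrCreate()`.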

@@ -579,7 +579,7 @@ Use either one of the following options
- Add the following Maven Coordinates to the interpreter's library list

```bash
com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7
com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8
```

- Add a path to a pre-built jar from [here](#compiled-jars) to the interpreter's library list, making sure the jar is available on the driver path
@@ -589,7 +589,7 @@ com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7
In addition to the previous step, install the Python module through pip:

```bash
pip install spark-nlp==4.2.7
pip install spark-nlp==4.2.8
```

Or you can install `spark-nlp` from inside Zeppelin by using Conda:
@@ -614,7 +614,7 @@ The easiest way to get this done on Linux and macOS is to simply install `spark-
$ conda create -n sparknlp python=3.8 -y
$ conda activate sparknlp
# spark-nlp by default is based on pyspark 3.x
$ pip install spark-nlp==4.2.7 pyspark==3.2.3 jupyter
$ pip install spark-nlp==4.2.8 pyspark==3.2.3 jupyter
$ jupyter notebook
```

@@ -630,7 +630,7 @@ export PYSPARK_PYTHON=python3
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS=notebook

pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7
pyspark --packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8
```

Alternatively, you can combine pyspark's `--jars` option with `pip install spark-nlp`
@@ -655,7 +655,7 @@ This script comes with the two options to define `pyspark` and `spark-nlp` versi
# -s is for spark-nlp
# -g will enable upgrading libcudnn8 to 8.1.0 on Google Colab for GPU usage
# by default they are set to the latest
!wget https://setup.johnsnowlabs.com/colab.sh -O - | bash /dev/stdin -p 3.2.3 -s 4.2.7
!wget https://setup.johnsnowlabs.com/colab.sh -O - | bash /dev/stdin -p 3.2.3 -s 4.2.8
```

[Spark NLP quick start on Google Colab](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/quick_start_google_colab.ipynb) is a live demo on Google Colab that performs named entity recognition and sentiment analysis by using Spark NLP pretrained pipelines.
@@ -676,7 +676,7 @@ This script comes with the two options to define `pyspark` and `spark-nlp` versi
# -s is for spark-nlp
# -g will enable upgrading libcudnn8 to 8.1.0 on Kaggle for GPU usage
# by default they are set to the latest
!wget https://setup.johnsnowlabs.com/colab.sh -O - | bash /dev/stdin -p 3.2.3 -s 4.2.7
!wget https://setup.johnsnowlabs.com/colab.sh -O - | bash /dev/stdin -p 3.2.3 -s 4.2.8
```

[Spark NLP quick start on Kaggle Kernel](https://www.kaggle.com/mozzie/spark-nlp-named-entity-recognition) is a live demo on Kaggle Kernel that performs named entity recognition by using a Spark NLP pretrained pipeline.
@@ -694,9 +694,9 @@ This script comes with the two options to define `pyspark` and `spark-nlp` versi

3. In `Libraries` tab inside your cluster you need to follow these steps:

3.1. Install New -> PyPI -> `spark-nlp==4.2.7` -> Install
3.1. Install New -> PyPI -> `spark-nlp==4.2.8` -> Install

3.2. Install New -> Maven -> Coordinates -> `com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7` -> Install
3.2. Install New -> Maven -> Coordinates -> `com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8` -> Install

4. Now you can attach your notebook to the cluster and use Spark NLP!

@@ -744,7 +744,7 @@ A sample of your software configuration in JSON on S3 (must be public access):
"spark.kryoserializer.buffer.max": "2000M",
"spark.serializer": "org.apache.spark.serializer.KryoSerializer",
"spark.driver.maxResultSize": "0",
"spark.jars.packages": "com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7"
"spark.jars.packages": "com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8"
}
}]
```
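This software-configuration JSON can also be generated programmatically. A sketch, assuming the standard EMR `spark-defaults` classification (the helper name is ours; only the property values shown above come from the sample):

```python
import json

# Sketch: emit the EMR software-configuration JSON for Spark NLP.
# "spark-defaults" is the usual EMR classification for these properties.
def emr_software_config(version: str = "4.2.8"):
    return [{
        "Classification": "spark-defaults",
        "Properties": {
            "spark.kryoserializer.buffer.max": "2000M",
            "spark.serializer": "org.apache.spark.serializer.KryoSerializer",
            "spark.driver.maxResultSize": "0",
            "spark.jars.packages": f"com.johnsnowlabs.nlp:spark-nlp_2.12:{version}",
        },
    }]

print(json.dumps(emr_software_config(), indent=2))
```

The resulting file can then be uploaded to S3 and referenced from the `aws emr create-cluster` call below.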
@@ -753,7 +753,7 @@ A sample of AWS CLI to launch EMR cluster:

```sh
aws emr create-cluster \
--name "Spark NLP 4.2.7" \
--name "Spark NLP 4.2.8" \
--release-label emr-6.2.0 \
--applications Name=Hadoop Name=Spark Name=Hive \
--instance-type m4.4xlarge \
@@ -817,7 +817,7 @@ gcloud dataproc clusters create ${CLUSTER_NAME} \
--enable-component-gateway \
--metadata 'PIP_PACKAGES=spark-nlp spark-nlp-display google-cloud-bigquery google-cloud-storage' \
--initialization-actions gs://goog-dataproc-initialization-actions-${REGION}/python/pip-install.sh \
--properties spark:spark.serializer=org.apache.spark.serializer.KryoSerializer,spark:spark.driver.maxResultSize=0,spark:spark.kryoserializer.buffer.max=2000M,spark:spark.jars.packages=com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7
--properties spark:spark.serializer=org.apache.spark.serializer.KryoSerializer,spark:spark.driver.maxResultSize=0,spark:spark.kryoserializer.buffer.max=2000M,spark:spark.jars.packages=com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8
```

2. On an existing one, you need to install spark-nlp and spark-nlp-display packages from PyPI.
@@ -856,7 +856,7 @@ spark = SparkSession.builder \
.config("spark.kryoserializer.buffer.max", "2000m") \
.config("spark.jsl.settings.pretrained.cache_folder", "sample_data/pretrained") \
.config("spark.jsl.settings.storage.cluster_tmp_dir", "sample_data/storage") \
.config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7") \
.config("spark.jars.packages", "com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8") \
.getOrCreate()
```

@@ -870,7 +870,7 @@ spark-shell \
--conf spark.kryoserializer.buffer.max=2000M \
--conf spark.jsl.settings.pretrained.cache_folder="sample_data/pretrained" \
--conf spark.jsl.settings.storage.cluster_tmp_dir="sample_data/storage" \
--packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7
--packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8
```

**pyspark:**
@@ -883,7 +883,7 @@ pyspark \
--conf spark.kryoserializer.buffer.max=2000M \
--conf spark.jsl.settings.pretrained.cache_folder="sample_data/pretrained" \
--conf spark.jsl.settings.storage.cluster_tmp_dir="sample_data/storage" \
--packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.7
--packages com.johnsnowlabs.nlp:spark-nlp_2.12:4.2.8
```

**Databricks:**
@@ -1147,12 +1147,12 @@ spark = SparkSession.builder \
.config("spark.driver.memory","16G")\
.config("spark.driver.maxResultSize", "0") \
.config("spark.kryoserializer.buffer.max", "2000M")\
.config("spark.jars", "/tmp/spark-nlp-assembly-4.2.7.jar")\
.config("spark.jars", "/tmp/spark-nlp-assembly-4.2.8.jar")\
.getOrCreate()
```

- You can download the provided Fat JARs from each [release notes](https://github.com/JohnSnowLabs/spark-nlp/releases) page; make sure to pick the one that suits your environment in terms of device (CPU/GPU) and Apache Spark version (3.0.x, 3.1.x, 3.2.x, and 3.3.x)
- If you are running locally, you can load the Fat JAR from your local file system; in a cluster setup, however, you need to put the Fat JAR on a distributed file system such as HDFS, DBFS, or S3 (e.g., `hdfs:///tmp/spark-nlp-assembly-4.2.7.jar`)
- If you are running locally, you can load the Fat JAR from your local file system; in a cluster setup, however, you need to put the Fat JAR on a distributed file system such as HDFS, DBFS, or S3 (e.g., `hdfs:///tmp/spark-nlp-assembly-4.2.8.jar`)

Example of using pretrained models and pipelines offline:

2 changes: 1 addition & 1 deletion build.sbt
@@ -6,7 +6,7 @@ name := getPackageName(is_m1, is_gpu, is_aarch64)

organization := "com.johnsnowlabs.nlp"

version := "4.2.7"
version := "4.2.8"

(ThisBuild / scalaVersion) := scalaVer

8 changes: 4 additions & 4 deletions conda/meta.yaml
@@ -1,15 +1,15 @@
package:
name: "spark-nlp"
version: 4.2.7
version: 4.2.8

app:
entry: spark-nlp
summary: Natural Language Understanding Library for Apache Spark.

source:
fn: spark-nlp-4.2.7.tar.gz
url: https://files.pythonhosted.org/packages/1d/e0/c123346f12e9d312c0b6bfecbd96db9e899882e01bc1adb338349d9e1088/spark-nlp-4.2.7.tar.gz
sha256: 071f5b06ae10319cffe5a4fa22586a5b269800578e8a74de912abf123fd01bdf
fn: spark-nlp-4.2.8.tar.gz
url: https://files.pythonhosted.org/packages/5a/af/9c73a6a6a74f2848209001194bef19b74cfe04fdd070aec529d290ce239d/spark-nlp-4.2.8.tar.gz
sha256: 0573d006538808fd46a102f7efc79c6a7a37d68800e1b2cbf0607d0128a724f1
build:
noarch: generic
number: 0
2 changes: 1 addition & 1 deletion docs/_includes/docs-healthcare-pagination.html
@@ -10,7 +10,7 @@
</li>
</ul>
<ul class="pagination owl-carousel pagination_big">
<li><a href="release_notes_4_2_7">4.2.7</a></li>
<li><a href="release_notes_4_2_8">4.2.8</a></li>
<li><a href="release_notes_4_2_4">4.2.4</a></li>
<li><a href="release_notes_4_2_3">4.2.3</a></li>
<li><a href="release_notes_4_2_2">4.2.2</a></li>
9 changes: 0 additions & 9 deletions docs/_includes/input_output_image.html

This file was deleted.
