Merge pull request #3113 from mozilla/generate-package-cpp
Rewrite generate_package.py in C++ to avoid training dependencies
reuben authored Jul 2, 2020
2 parents 24526aa + 65915c7 commit d2d46c3
Showing 40 changed files with 549 additions and 506 deletions.
4 changes: 2 additions & 2 deletions data/README.rst
@@ -3,9 +3,9 @@ Language-Specific Data

This directory contains language-specific data files. Most importantly, you will find here:

1. A list of unique characters for the target language (e.g. English) in `data/alphabet.txt`
1. A list of unique characters for the target language (e.g. English) in ``data/alphabet.txt``. After installing the training code, you can check ``python -m deepspeech_training.util.check_characters --help`` for a tool that creates an alphabet file from a list of training CSV files.

2. A scorer package (`data/lm/kenlm.scorer`) generated with `data/lm/generate_package.py`. The scorer package includes a binary n-gram language model generated with `data/lm/generate_lm.py`.
2. A scorer package (``data/lm/kenlm.scorer``) generated with ``generate_scorer_package`` (``native_client/generate_scorer_package.cpp``). The scorer package includes a binary n-gram language model generated with ``data/lm/generate_lm.py``.

For more information on how to build these resources from scratch, see the ``External scorer scripts`` section on `deepspeech.readthedocs.io <https://deepspeech.readthedocs.io/>`_.
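
As an illustration of the alphabet tool mentioned in item 1, a run might look like the sketch below. Only the ``--help`` invocation comes from the text above; the other flag names and the CSV file names are assumptions, so consult the tool's actual ``--help`` output before use.

.. code-block:: bash

   # Confirmed entry point: prints the tool's real flags.
   python -m deepspeech_training.util.check_characters --help
   # Hypothetical invocation: flag and file names are illustrative only.
   python -m deepspeech_training.util.check_characters \
       -csv_files train.csv,dev.csv -alphabet-format > data/alphabet.txt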

157 changes: 0 additions & 157 deletions data/lm/generate_package.py

This file was deleted.

6 changes: 4 additions & 2 deletions doc/Decoder.rst
@@ -56,9 +56,11 @@ At decoding time, the scorer is queried every time a Unicode codepoint is predicted

**Acoustic models trained with ``--utf8`` MUST NOT be used with an alphabet based scorer. Conversely, acoustic models trained with an alphabet file MUST NOT be used with a UTF-8 scorer.**

UTF-8 scorers can be built by using an input corpus with space separated codepoints. If your corpus only contains single codepoints separated by spaces, ``data/lm/generate_package.py`` should automatically enable UTF-8 mode, and it should print the message "Looks like a character based model."
UTF-8 scorers can be built by using an input corpus with space separated codepoints. If your corpus only contains single codepoints separated by spaces, ``generate_scorer_package`` should automatically enable UTF-8 mode, and it should print the message "Looks like a character based model."

If the message "Doesn't look like a character based model." is printed, you should double-check your inputs to make sure they only contain single codepoints separated by spaces. UTF-8 mode can be forced by specifying the ``--force_utf8`` flag when running ``data/lm/generate_package.py``, but it is NOT RECOMMENDED.
If the message "Doesn't look like a character based model." is printed, you should double-check your inputs to make sure they only contain single codepoints separated by spaces. UTF-8 mode can be forced by specifying the ``--force_utf8`` flag when running ``generate_scorer_package``, but it is NOT RECOMMENDED.

See :ref:`scorer-scripts` for more details on using ``generate_scorer_package``.

Because KenLM uses spaces as a word separator, the resulting language model will not include space characters in it. If you wish to use UTF-8 mode but still model spaces, you need to replace spaces in the input corpus with a different character **before** converting it to space separated codepoints. For example:
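
The concrete example is collapsed in this diff view. A minimal sketch of such a preprocessing step, assuming ``|`` as the space substitute and file names of our own choosing (not taken from the original document), could be:

.. code-block:: bash

   # Substitute '|' for spaces, then separate every codepoint with a space.
   python3 -c 'import sys; [print(" ".join(line.strip().replace(" ", "|"))) for line in sys.stdin]' \
       < corpus.txt > corpus_codepoints.txt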

16 changes: 10 additions & 6 deletions doc/Scorer.rst
@@ -5,7 +5,9 @@ External scorer scripts

DeepSpeech pre-trained models include an external scorer. This document explains how to reproduce our external scorer, as well as adapt the scripts to create your own.

The scorer is composed of two sub-components, a KenLM language model and a trie data structure containing all words in the vocabulary. In order to create the scorer package, first we must create a KenLM language model (using ``data/lm/generate_lm.py``), and then use ``data/lm/generate_package.py`` to create the final package file, which includes the trie data structure.
The scorer is composed of two sub-components, a KenLM language model and a trie data structure containing all words in the vocabulary. In order to create the scorer package, first we must create a KenLM language model (using ``data/lm/generate_lm.py``), and then use ``generate_scorer_package`` to create the final package file, which includes the trie data structure.

The ``generate_scorer_package`` binary is part of the native client package that is included with official releases. You can find the appropriate archive for your platform in the `GitHub release downloads <https://github.com/mozilla/DeepSpeech/releases/latest>`_. The native client package is named ``native_client.{arch}.{config}.{plat}.tar.xz``, where ``{arch}`` is the architecture the binary was built for (for example ``amd64`` or ``arm64``), ``{config}`` is the build configuration (which does not matter for building scorer packages), and ``{plat}`` is the platform the binary was built for (for example ``linux`` or ``osx``). If you wanted to run the ``generate_scorer_package`` binary on a Linux desktop, you would download ``native_client.amd64.cpu.linux.tar.xz``.

Reproducing our external scorer
-------------------------------
@@ -36,12 +38,15 @@ Else you have to build `KenLM <https://github.com/kpu/kenlm>`_ first and then pa
--binary_a_bits 255 --binary_q_bits 8 --binary_type trie
Afterwards you can use ``generate_package.py`` to generate the scorer package using the ``lm.binary`` and ``vocab-500000.txt`` files:
Afterwards you can use ``generate_scorer_package`` to generate the scorer package using the ``lm.binary`` and ``vocab-500000.txt`` files:

.. code-block:: bash
cd data/lm
python3 generate_package.py --alphabet ../alphabet.txt --lm lm.binary --vocab vocab-500000.txt \
# Download and extract appropriate native_client package:
curl -LO http://github.com/mozilla/DeepSpeech/releases/...
tar xvf native_client.*.tar.xz
./generate_scorer_package --alphabet ../alphabet.txt --lm lm.binary --vocab vocab-500000.txt \
--package kenlm.scorer --default_alpha 0.931289039105002 --default_beta 1.1834137581510284
Building your own scorer
@@ -51,7 +56,6 @@ Building your own scorer can be useful if you're using models in a narrow usage

The LibriSpeech LM training text used by our scorer is around 4GB uncompressed, which should give an idea of the size of a corpus needed for a reasonable language model for general speech recognition. For more constrained use cases with smaller vocabularies, you don't need as much data, but you should still try to gather as much as you can.

With a text corpus in hand, you can then re-use the ``generate_lm.py`` and ``generate_package.py`` scripts to create your own scorer that is compatible with DeepSpeech clients and language bindings. Before building the language model, you must first familiarize yourself with the `KenLM toolkit <https://kheafield.com/code/kenlm/>`_. Most of the options exposed by the ``generate_lm.py`` script are simply forwarded to KenLM options of the same name, so you must read the KenLM documentation in order to fully understand their behavior.
With a text corpus in hand, you can then re-use ``generate_lm.py`` and ``generate_scorer_package`` to create your own scorer that is compatible with DeepSpeech clients and language bindings. Before building the language model, you must first familiarize yourself with the `KenLM toolkit <https://kheafield.com/code/kenlm/>`_. Most of the options exposed by the ``generate_lm.py`` script are simply forwarded to KenLM options of the same name, so you must read the KenLM documentation in order to fully understand their behavior.

After using ``generate_lm.py`` to create a KenLM language model binary file, you can use ``generate_package.py`` to create a scorer package as described in the previous section. Note that we have a :github:`lm_optimizer.py script <lm_optimizer.py>` which can be used to find good default values for alpha and beta. To use it, you must first
generate a package with any value set for default alpha and beta flags. For this step, it doesn't matter what values you use, as they'll be overridden by ``lm_optimizer.py``. Then, use ``lm_optimizer.py`` with this scorer file to find good alpha and beta values. Finally, use ``generate_package.py`` again, this time with the new values.
After using ``generate_lm.py`` to create a KenLM language model binary file, you can use ``generate_scorer_package`` to create a scorer package as described in the previous section. Note that we have a :github:`lm_optimizer.py script <lm_optimizer.py>` which can be used to find good default values for alpha and beta. To use it, you must first generate a package with any value set for default alpha and beta flags. For this step, it doesn't matter what values you use, as they'll be overridden by ``lm_optimizer.py`` later. Then, use ``lm_optimizer.py`` with this scorer file to find good alpha and beta values. Finally, use ``generate_scorer_package`` again, this time with the new values.
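
Spelling that loop out as shell steps may help. The ``generate_scorer_package`` flags below appear elsewhere in this PR, but the ``lm_optimizer.py`` flags are assumptions, so check its ``--help`` before relying on them:

.. code-block:: bash

   # 1. Build a draft package; the alpha/beta placeholders will be overridden later.
   ./generate_scorer_package --alphabet alphabet.txt --lm lm.binary --vocab vocab-500000.txt \
       --package draft.scorer --default_alpha 1.0 --default_beta 1.0
   # 2. Search for good alpha/beta values (hypothetical flag names).
   python3 lm_optimizer.py --scorer_path draft.scorer --test_files dev.csv
   # 3. Rebuild the package with the values reported by step 2.
   ./generate_scorer_package --alphabet alphabet.txt --lm lm.binary --vocab vocab-500000.txt \
       --package kenlm.scorer --default_alpha <best_alpha> --default_beta <best_beta>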
43 changes: 34 additions & 9 deletions native_client/BUILD
@@ -2,6 +2,7 @@

load("@org_tensorflow//tensorflow:tensorflow.bzl", "tf_cc_shared_object")
load("@local_config_cuda//cuda:build_defs.bzl", "if_cuda")
load("@com_github_nelhage_rules_boost//:boost/boost.bzl", "boost_deps")

load(
"@org_tensorflow//tensorflow/lite:build_def.bzl",
@@ -74,28 +75,36 @@ cc_library(
"ctcdecode/scorer.cpp",
"ctcdecode/path_trie.cpp",
"ctcdecode/path_trie.h",
"alphabet.cc",
] + OPENFST_SOURCES_PLATFORM,
hdrs = [
"ctcdecode/ctc_beam_search_decoder.h",
"ctcdecode/scorer.h",
"ctcdecode/decoder_utils.h",
"alphabet.h",
],
includes = [
".",
"ctcdecode/third_party/ThreadPool",
] + OPENFST_INCLUDES_PLATFORM,
deps = [":kenlm"]
deps = [":kenlm"],
linkopts = [
"-lm",
"-ldl",
"-pthread",
],
)

tf_cc_shared_object(
name = "libdeepspeech.so",
srcs = [
"deepspeech.cc",
"deepspeech.h",
"alphabet.h",
"modelstate.h",
"deepspeech_errors.cc",
"modelstate.cc",
"workspace_status.h",
"modelstate.h",
"workspace_status.cc",
"workspace_status.h",
] + select({
"//native_client:tflite": [
"tflitemodelstate.h",
@@ -185,6 +194,27 @@ genrule(
cmd = "dsymutil $(location :libdeepspeech.so) -o $@"
)

cc_binary(
name = "generate_scorer_package",
srcs = [
"generate_scorer_package.cpp",
"deepspeech_errors.cc",
],
copts = ["-std=c++11"],
deps = [
":decoder",
"@com_google_absl//absl/flags:flag",
"@com_google_absl//absl/flags:parse",
"@com_google_absl//absl/types:optional",
"@boost//:program_options",
],
linkopts = [
"-lm",
"-ldl",
"-pthread",
],
)

cc_binary(
name = "enumerate_kenlm_vocabulary",
srcs = [
@@ -201,10 +231,5 @@ cc_binary(
"trie_load.cc",
],
copts = ["-std=c++11"],
linkopts = [
"-lm",
"-ldl",
"-pthread",
],
deps = [":decoder"],
)
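
For anyone building from source instead of downloading a release archive, the new ``generate_scorer_package`` target declared above should be buildable like any other ``cc_binary`` in this file. A sketch, assuming the TensorFlow workspace is already configured and leaving out any extra ``--config`` flags your setup may need:

.. code-block:: bash

   # Target name taken from the BUILD file above; flags may vary by setup.
   bazel build //native_client:generate_scorer_package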