docs: Fixing a bunch of spelling errors, adding CI check
jbohren committed Apr 14, 2016
1 parent 0c66a2f commit 8547bbf
Showing 16 changed files with 72 additions and 47 deletions.
2 changes: 1 addition & 1 deletion .travis.before_install.bash
@@ -1,7 +1,7 @@
#!/usr/bin/env bash

if [ "$TRAVIS_OS_NAME" == "linux" ]; then
echo "No Linux-specific before_install steps."
sudo apt-get install enchant
elif [ "$TRAVIS_OS_NAME" == "osx" ]; then
if [ "$PYTHON" == "/usr/local/bin/python3" ]; then
brew install python3
3 changes: 2 additions & 1 deletion .travis.yml
@@ -35,7 +35,7 @@ matrix:
before_install:
# Install catkin_tools dependencies
- source .travis.before_install.bash
- pip install setuptools argparse catkin-pkg distribute PyYAML psutil trollius osrf_pycommon
- pip install setuptools argparse catkin-pkg distribute PyYAML psutil trollius osrf_pycommon pyenchant sphinxcontrib-spelling
install:
# Install catkin_tools
- python setup.py develop
@@ -55,6 +55,7 @@ script:
# Build documentation
- pushd docs
- make html
- sphinx-build -b spelling . build
- popd
notifications:
email: false
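
The spelling check added above can be reproduced locally with roughly these commands (the ``apt-get`` step assumes an Ubuntu/Debian host, matching the Linux CI path):

.. code-block:: bash

    # Install the enchant backend and the Sphinx spelling plugin
    sudo apt-get install enchant
    pip install pyenchant sphinxcontrib-spelling

    # Run the spelling builder over the docs, mirroring the CI script above
    cd docs
    sphinx-build -b spelling . build
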
4 changes: 2 additions & 2 deletions docs/advanced/catkin_shell_verbs.rst
@@ -11,7 +11,7 @@ When you source the resulting file, you can use ``bash``/``zsh`` shell functions
Provided verbs are:

- ``catkin cd`` -- Change to package directory in source space.
- ``catkin source`` -- Source the develspace or installspace of the containing workspace.
- ``catkin source`` -- Source the devel space or install space of the containing workspace.
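
As a quick illustration (the package name is hypothetical), the two verbs are used like this after sourcing the generated setup file:

.. code-block:: bash

    catkin cd my_package   # jump to the package's directory in the source space
    catkin source          # source this workspace's devel space (or install space)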

Full Command-Line Interface
^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -24,7 +24,7 @@ Change to package directory in source space with `cd` verb.
ARGS are any valid catkin locate arguments
The `source` verb sources the develspace or installspace of the containing workspace.
The `source` verb sources the devel space or install space of the containing workspace.

.. code-block:: text
32 changes: 16 additions & 16 deletions docs/advanced/job_executor.rst
@@ -2,7 +2,7 @@ The Catkin Execution Engine
===========================

One of the core modules in ``catkin_tools`` is the **job executor**.
The executor performs jobs required to complete a task in a way that maximizes (or achives a specific) resource utilization subject to job dependency constraints.
The executor performs jobs required to complete a task in a way that maximizes (or achieves a specific) resource utilization subject to job dependency constraints.
The executor is closely integrated with logging and job output capture.
This page details the design and implementation of the executor.

Expand All @@ -13,30 +13,30 @@ The execution model is fairly simple.
The executor executes a single **task** for a given command (i.e.
``build``, ``clean``, etc.).
A **task** is a set of **jobs** which are related by an acyclic dependency graph.
Each **job** is given a unique identifier and is composed of a set of dependencies and a sequence of executable **stages**, which are arbitrary functions or subprocess calls which utilize one or more **workers** to be executed.
Each **job** is given a unique identifier and is composed of a set of dependencies and a sequence of executable **stages**, which are arbitrary functions or sub-process calls which utilize one or more **workers** to be executed.
The allocation of workers is managed by the **job server**.
Throughout execution, synchronization with the user-facing interface and output formatting are mediated by a simple **event queue**.

The executor is single-threaded and uses an asynchronous loop to execute jobs as futures.
If a job contains blocking stages it can utilize a normal thread pool for execution, but is still only guaranteed one worker by the main loop of the executor.
See the following section for more information on workers and the job server.

The input to the executor is a list of topologically-sorted jobs with no circular dependencies and some parameters which control the jobserver behavior.
The input to the executor is a list of topologically-sorted jobs with no circular dependencies and some parameters which control the job server behavior.
These behavior parameters are explained in detail in the following section.

Each job is in one of the following lifecycle states at any time:
Each job is in one of the following life-cycle states at any time:

- ``PENDING`` Not ready to be executed (dependencies not yet completed)
- ``QUEUED`` Ready to be executed once workers are available
- ``ACTIVE`` Being executed by one or more workers
- ``FINISHED`` Has been executed and either succeded or failed (terminal)
- ``FINISHED`` Has been executed and either succeeded or failed (terminal)
- ``ABANDONED`` Was not built because a prerequisite was not met (terminal)

.. figure:: executor_job_lifecycle.svg
:scale: 50 %
:alt: Executor Job Lifecycle
:alt: Executor Job Life-cycle

**Executor Job lifecycle**
**Executor Job Life-cycle**

All jobs begin in the ``PENDING`` state, and any jobs with unsatisfiable dependencies are immediately set to ``ABANDONED``, and any jobs without dependencies are immediately set to ``QUEUED``.
After the state initialization, the executor processes jobs in a main loop until they are in one of the two terminal states (``FINISHED`` or ``ABANDONED``).
@@ -46,7 +46,7 @@ Each main loop iteration does the following:
- Report status of all jobs to the event queue
- Retrieve ``ACTIVE`` job futures which have completed and set them ``FINISHED``
- Check for any ``PENDING`` jobs which need to be ``ABANDONED`` due to failed jobs
- Change all ``PENDING`` jobs whose dependencies are satisifed to ``QUEUED``
- Change all ``PENDING`` jobs whose dependencies are satisfied to ``QUEUED``

Once each job is in one of terminal states, the executor pushes a final status event and returns.
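
The loop described above can be condensed into a small illustrative sketch; the class and attribute names below are invented for clarity and are not the actual ``catkin_tools`` implementation:

.. code-block:: python

    PENDING, QUEUED, ACTIVE, FINISHED, ABANDONED = (
        'pending', 'queued', 'active', 'finished', 'abandoned')

    class Job(object):
        """Minimal illustrative job record (not the real catkin_tools class)."""
        def __init__(self, dependencies=()):
            self.state = PENDING
            self.dependencies = list(dependencies)
            self.future = None       # set when the executor schedules the job
            self.succeeded = False   # set when the future completes

    def advance(jobs):
        """One iteration of the main loop described above (sketch only)."""
        for job in jobs:
            if job.state == ACTIVE and job.future.done():
                job.state = FINISHED          # succeeded or failed (terminal)
            elif job.state == PENDING:
                failed = [d for d in job.dependencies
                          if d.state == ABANDONED or
                          (d.state == FINISHED and not d.succeeded)]
                if failed:
                    job.state = ABANDONED     # prerequisite not met (terminal)
                elif all(d.state == FINISHED and d.succeeded
                         for d in job.dependencies):
                    job.state = QUEUED        # ready once a worker is available
        return all(j.state in (FINISHED, ABANDONED) for j in jobs)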

@@ -59,15 +59,15 @@ Once a job is started, it is assigned a single worker from the job server.
These are considered **top-level jobs** since they are managed directly by the catkin executor.
The number of top-level jobs can be configured for a given task.

Additionally to top-level paralellism, some job stages are capable of running in parallel, themselves.
Additionally to top-level parallelism, some job stages are capable of running in parallel, themselves.
In such cases, the job server can interface directly with the underlying stage's low-level job allocation.
This enables multi-level parallelism without allocating more than a fixed number of jobs.

.. figure:: executor_job_resources.svg
:scale: 50 %
:alt: Executor job resources

**Executor Job Flow and Resource Utilization** -- In this snapshot of the job pipeline, the executor is executing four of six possible top-level jobs, each with three stages, and using sevel of eight total workers. Two jobs are executing subprocesses, which have side-channel communication with the job server.
**Executor Job Flow and Resource Utilization** -- In this snapshot of the job pipeline, the executor is executing four of six possible top-level jobs, each with three stages, and using seven of eight total workers. Two jobs are executing sub-processes, which have side-channel communication with the job server.

One such parallel-capable stage is the GNU Make build stage.
In this case, the job server implements a GNU Make job server interface, which involves reading and writing tokens from file handles passed as build flags to the Make command.
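
Concretely, each ``make`` invocation launched by such a stage is passed job server flags along these lines; the exact option name depends on the GNU Make version (newer releases use ``--jobserver-auth`` rather than ``--jobserver-fds``), and the file descriptor numbers are only an example:

.. code-block:: text

    make --jobserver-fds=7,8 -j
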
@@ -91,21 +91,21 @@ Jobs and Job Stages
^^^^^^^^^^^^^^^^^^^

As mentioned above, a **job** is a set of dependencies and a sequence of **job stages**.
Jobs and stages are constructed before a given task starts executing, and hold only specificaitons of what needs to be done to complete them.
Jobs and stages are constructed before a given task starts executing, and hold only specifications of what needs to be done to complete them.
All stages are given a label for user introspection, a logger interface, and can either require or not require allocation of a worker from the job server.

Stage execution is performed asynchronously by Python's ``asyncio`` module.
This means that exceptions thrown in job stages are handled directly by the executor.
It also means job stages can be interrupted easily through Python's normal signal handling mechanism.

Stages can either be **command stages** (subprocess commands) or **function stages** (python functions).
Stages can either be **command stages** (sub-process commands) or **function stages** (python functions).
In either case, loggers used by stages support segmentation of ``stdout`` and ``stderr`` from job stages for both real-time introspection and logging.


Command Stages
~~~~~~~~~~~~~~~

In addition to the basic arguments mentioned above, command stages are paramterized by the standard subprocess command arguments including the following:
In addition to the basic arguments mentioned above, command stages are paramterized by the standard sub-process command arguments including the following:

- The command, itself, and its arguments,
- The working directory for the command,
@@ -114,15 +114,15 @@ In addition to the basic arguments mentioned above, command stages are paramteri
- Whether to emulate a TTY
- Whether to partition ``stdout`` and ``stderr``

When executed, command stages use ``asncio``'s asynchronous process executor with a custom I/O protocol.
When executed, command stages use ``asyncio``'s asynchronous process executor with a custom I/O protocol.
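
The parameters listed above can be pictured as a small record, sketched below; the attribute names are illustrative and are not the project's actual API:

.. code-block:: python

    class CommandStageSketch(object):
        """Illustrative parameters of a command stage (not the real catkin_tools class)."""
        def __init__(self, label, cmd, cwd, env=None,
                     shell=False, emulate_tty=False, partition_output=False):
            self.label = label                        # user-facing stage name
            self.cmd = cmd                            # command and arguments, e.g. ['make', '-j4']
            self.cwd = cwd                            # working directory for the command
            self.env = env or {}                      # environment overrides (assumed)
            self.shell = shell                        # run the command through a shell (assumed)
            self.emulate_tty = emulate_tty            # emulate a TTY for the sub-process
            self.partition_output = partition_output  # keep stdout and stderr separate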

Function Stages
~~~~~~~~~~~~~~~

In addition to the basic arguments mentioned above, function stages are parameterized by a function handle and a set of function-specific Python arguments and keyword arguments.
When executed, they use the thread pool mentioned above.

Since the function stages aren't subprocesses, I/O isn't piped or redirected.
Since the function stages aren't sub-processes, I/O isn't piped or redirected.
Instead, a custom I/O logger is passed to the function for output.
Functions used as function stages should use this logger to write to ``stdout`` and ``stderr`` instead of using normal system calls.
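
A minimal sketch of a function used as a function stage is shown below; the ``(logger, event_queue, ...)`` signature and the ``out``/``err`` logger methods are assumptions for illustration:

.. code-block:: python

    def remove_stale_marker(logger, event_queue, path):
        """Illustrative function stage body: all output goes through the provided logger."""
        logger.out('Checking for stale marker in: {0}'.format(path))   # captured as stdout
        # ... perform the actual work here ...
        logger.err('Warning: no marker found, nothing to remove')      # captured as stderr
        return 0   # assumed convention: zero marks the stage as succeeded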

@@ -153,5 +153,5 @@ The modeled events include the following:
- ``STAGE_PROGRESS`` A job stage has executed partially,
- ``STDOUT`` A status message from a job,
- ``STDERR`` A warning or error message from a job,
- ``SUBPROCESS`` A subprocess has been created,
- ``SUBPROCESS`` A sub process has been created,
- ``MESSAGE`` Arbitrary string message
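
A consumer of the event queue just drains it and dispatches on the event type, roughly as in this sketch; the event object layout (``event_id`` and ``data``) and the payload keys are assumptions for illustration:

.. code-block:: python

    def run_status_display(event_queue):
        """Illustrative consumer that formats executor events for the console."""
        while True:
            event = event_queue.get()        # blocks until the executor emits an event
            if event is None:                # sentinel used by this sketch to stop
                break
            if event.event_id in ('STDOUT', 'STDERR'):
                print(event.data['output'])                       # hypothetical payload key
            elif event.event_id == 'JOB_STATUS':
                print('active jobs: %s' % event.data['active'])   # hypothetical payload key
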
2 changes: 1 addition & 1 deletion docs/advanced/verb_customization.rst
@@ -22,7 +22,7 @@ Below are the built-in aliases as displayed by this command:
Defining Additional Aliases
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Verb aliases are defined in the ``verb_aliases`` subdirectory of the catkin config folder, ``~/.config/catkin/verb_aliases``.
Verb aliases are defined in the ``verb_aliases`` sub-directory of the catkin config folder, ``~/.config/catkin/verb_aliases``.
Any YAML files in that folder (files with a ``.yaml`` extension) will be processed as definition files.

These files are formatted as simple YAML dictionaries which map aliases to expanded expressions, which must be composed of other ``catkin`` verbs, options, or aliases:
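
For instance, a file named ``01-my-aliases.yaml`` in that directory (any ``.yaml`` name works) could contain:

.. code-block:: yaml

    # alias: expansion composed of other catkin verbs, options, or aliases
    b: build
    bt: b --this
    ls: list
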
1 change: 1 addition & 0 deletions docs/conf.py
@@ -34,6 +34,7 @@
'sphinx.ext.intersphinx',
'sphinx.ext.todo',
'sphinx.ext.viewcode',
'sphinxcontrib.spelling',
#'sphinxcontrib.programoutput',
#'sphinxcontrib.ansi',
]
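
The word list added below in ``docs/spelling_wordlist.txt`` matches the extension's default ``spelling_word_list_filename``, so it should be picked up without further configuration; an explicit setting, if ever needed, would be a short sketch like this (not part of this commit):

.. code-block:: python

    # Sketch only: explicit sphinxcontrib-spelling options in docs/conf.py
    spelling_word_list_filename = 'spelling_wordlist.txt'   # project-specific known-good words
    spelling_lang = 'en_US'                                 # enchant dictionary to use
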
2 changes: 1 addition & 1 deletion docs/development/adding_build_types.rst
@@ -9,7 +9,7 @@ The current release of ``catkin_tools`` supports building two types of packages:
In order to fully support additional build types, numerous additions need to be made to the command-line interfaces so that the necessary parameters can be passed to the ``build`` verb.
For partial support, however, all that's needded is to add a build type identifier and a function for generating build jobs.

The supported build typs are easily extendable using the ``setuptools`` ``entry_points`` interface without modifying the ``catkin_tools`` project, itself.
The supported build types are easily extendable using the ``setuptools`` ``entry_points`` interface without modifying the ``catkin_tools`` project, itself.
Regardless of what package the ``entry_point`` is defined in, it will be defined in the ``setup.py`` of that package, and will take this form:

.. code-block:: python
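
    # Hypothetical sketch of such an entry point declaration; the group name
    # 'catkin_tools.jobs' and the package/module names below are illustrative
    # assumptions, not taken from this commit.
    from setuptools import setup

    setup(
        name='my_catkin_extension',
        version='0.1.0',
        packages=['my_catkin_extension'],
        entry_points={
            'catkin_tools.jobs': [
                'my_build_type = my_catkin_extension.jobs:description',
            ],
        },
    )
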
2 changes: 1 addition & 1 deletion docs/history.rst
@@ -32,7 +32,7 @@ These defaults would result in the execution of the following commands:
$ cmake ../src -DCATKIN_DEVEL_SPACE=../devel -DCMAKE_INSTALL_PREFIX=../install
$ make -j<number of cores> -l<number of cores> [optional target, e.g. install]
An advantage of this approach is that the total configuration would be smaller than configuring each package individually and that the Make targets can be parallelized even amongst dependent packages.
An advantage of this approach is that the total configuration would be smaller than configuring each package individually and that the Make targets can be parallelized even among dependent packages.

In practice, however, it also means that in large workspaces, modification of the CMakeLists.txt of one package would necessitate the reconfiguration of all packages in the entire workspace.

6 changes: 3 additions & 3 deletions docs/migration.rst
@@ -38,10 +38,10 @@ Since all packages are built in isolation with ``catkin build``, you can't rely
Migration Troubleshooting
^^^^^^^^^^^^^^^^^^^^^^^^^

When migrating from ``catkin_make`` to catkin build, the most common problems come from Catkin packages taking advantge of package cross-talk in the CMake configuration stage.
When migrating from ``catkin_make`` to catkin build, the most common problems come from Catkin packages taking advantage of package cross-talk in the CMake configuration stage.

Many Catkin packages implicitly rely on other packages in a workspace to declare and find dependencies.
When switcing from ``catkin_make``, users will often discover these bugs.
When switching from ``catkin_make``, users will often discover these bugs.

Common Issues
-------------
@@ -200,7 +200,7 @@ CLI Comparison with ``catkin_make`` and ``catkin_make_isolated``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Below are tables mapping ``catkin_make`` and ``catkin_make_isolated`` arguments into ``catkin`` arguments.
Note that some ``catkin_make`` options can only be achived with the ``catkin config`` verb.
Note that some ``catkin_make`` options can only be achieved with the ``catkin config`` verb.

================================================= ============================================
catkin_make ... catkin ...
23 changes: 23 additions & 0 deletions docs/spelling_wordlist.txt
@@ -0,0 +1,23 @@
CMake
whitelist
whitelisted
Whitelisting
Quickstart
workflow
devel
env
metadata
buildtools
config
devel
deinitialize
dependants
args
extendable
preprocess
autotools
prebuild
internet
buildtool
logfile
unsetting
6 changes: 3 additions & 3 deletions docs/troubleshooting.rst
@@ -11,14 +11,14 @@ The ``catkin`` tool will detect the following issues automatically.
Missing Workspace Components
----------------------------

- Uninitialized workspace (mising ``.catkin_tools`` directory)
- Uninitialized workspace (missing ``.catkin_tools`` directory)
- Missing **source space** as specified by the configuration

Inconsistent Environment
------------------------

- The ``CMAKE_PREFIX_PATH`` environment variable is different than the cahced ``CMAKE_PREFIX_PATH``
- The explicitly extended workspace path yeilds a different ``CMAKE_PREFIX_PATH`` than the cached ``CMAKE_PREFIX_PATH``
- The ``CMAKE_PREFIX_PATH`` environment variable is different than the cached ``CMAKE_PREFIX_PATH``
- The explicitly extended workspace path yields a different ``CMAKE_PREFIX_PATH`` than the cached ``CMAKE_PREFIX_PATH``
- The **build space** or **devel space** was built with a different tool such as ``catkin_make`` or ``catkin_make_isolated``
- The **build space** or **devel space** was built in a different isolation mode

16 changes: 8 additions & 8 deletions docs/verbs/catkin_build.rst
@@ -77,7 +77,7 @@ This is sometimes required when running ``catkin build`` from within a program t
Console Messages
----------------

Normally, unless an error occurs, the output from each package's build proces is collected but not printed to the console.
Normally, unless an error occurs, the output from each package's build process is collected but not printed to the console.
All that is printed is a pair of messages designating the start and end of a package's build.
This is formatted like the following for the ``genmsg`` package:

@@ -140,7 +140,7 @@ Build Summary
-------------

At the end of each build, a brief build summary is printed to guarantee that anomalies aren't missed.
This summary displays the total runtime, the number of successful jobs, the number of jobs which produced warnings, and the number of jobs which weren't attempted due to failed dependencies.
This summary displays the total run-time, the number of successful jobs, the number of jobs which produced warnings, and the number of jobs which weren't attempted due to failed dependencies.

.. code-block:: none
@@ -201,7 +201,7 @@ Skipping Packages

Suppose you built every package up to ``roslib``, but that package had a build error.
After fixing the error, you could run the same build command again, but the ``build`` verb provides an option to save time in this situation.
If re-started from the beginning, none of the products of the dependencies of ``roslib`` would be re-built, but it would still take some time for the underlying byuildsystem to verify that for each package.
If re-started from the beginning, none of the products of the dependencies of ``roslib`` would be re-built, but it would still take some time for the underlying build system to verify that for each package.

Those checks could be skipped, however, by jumping directly to a given package.
You could use the ``--start-with`` option to continue the build where you left off after fixing the problem.
@@ -291,7 +291,7 @@ Advanced Options
Temporarily Changing Build Flags
--------------------------------

While the build configuratoin flags are set and stored in the build context, it's possible to temporarily override or augment them when using the ``build`` verb.
While the build configuration flags are set and stored in the build context, it's possible to temporarily override or augment them when using the ``build`` verb.

.. code-block:: bash
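
    # Illustrative example (flag values are placeholders): temporarily add CMake
    # arguments for this invocation only, without changing the stored configuration
    catkin build --cmake-args -DCMAKE_C_FLAGS="-Wall -W -Wno-unused-parameter"
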
@@ -313,13 +313,13 @@ This command passes the ``-DCMAKE_C_FLAGS=...`` arugment to all invocations of `
Configuring Build Jobs
----------------------

By default ``catkin build`` on a computer with ``N`` cores will build up to ``N`` packages in parallel and will distribute ``N`` ``make`` jobs among them using an internal jobserver.
If your platform doesn't support jobserver scheduling, ``catkin build`` will pass ``-jN -lN`` to ``make`` for each package.
By default ``catkin build`` on a computer with ``N`` cores will build up to ``N`` packages in parallel and will distribute ``N`` ``make`` jobs among them using an internal job server.
If your platform doesn't support job server scheduling, ``catkin build`` will pass ``-jN -lN`` to ``make`` for each package.

You can control the maximum number of packages allowed to build in parallel by using the ``-p`` or ``--parallel-packages`` option and you can change the number of ``make`` jobs available with the ``-j`` or ``--jobs`` option.

By default, these jobs options aren't passed to the underlying ``make`` command.
To disable the jobserver, you can use the ``--no-jobserver`` option, and you can pass flags directly to ``make`` with the ``--make-args`` option.
To disable the job server, you can use the ``--no-jobserver`` option, and you can pass flags directly to ``make`` with the ``--make-args`` option.
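
For example (the package and job counts are arbitrary):

.. code-block:: bash

    # Build at most 4 packages at once, distributing 8 make jobs among them
    catkin build -p 4 -j 8

    # Bypass the internal job server and pass -j4 directly to each package's make
    catkin build --no-jobserver --make-args -j4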

.. note::

@@ -342,7 +342,7 @@ For example, to specify that ``catkin build`` should not start additional parall
$ catkin build --mem-limit 50%
Alternatively, if it sohuld not start additional jobs when over 4GB of memory is used, you can specifiy:
Alternatively, if it should not start additional jobs when over 4GB of memory is used, you can specify:

.. code-block:: bash
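
    # Illustrative: an absolute memory ceiling; the option also accepts a
    # percentage, as in the example above
    catkin build --mem-limit 4G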