Add .readthedocs.yml #173

Merged: 4 commits, Dec 13, 2024
15 changes: 15 additions & 0 deletions .readthedocs.yml
@@ -0,0 +1,15 @@
version: 2
formats: all
sphinx:
configuration: docs/conf.py
fail_on_warning: true
build:
os: ubuntu-22.04
tools:
# For available versions, see:
# https://docs.readthedocs.io/en/stable/config-file/v2.html#build-tools-python
python: "3.10" # Keep in sync with .github/workflows/main.yml
python:
install:
- requirements: docs/requirements.txt
- path: .
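The new configuration can be sanity-checked before pushing. Below is a minimal sketch, assuming PyYAML is installed; the `CONFIG` string mirrors the file added above, and the asserted keys are the ones the Read the Docs v2 schema uses:

```python
# Sanity-check sketch for the .readthedocs.yml added in this PR.
# Assumes PyYAML is available in the environment.
import yaml

CONFIG = """\
version: 2
formats: all
sphinx:
  configuration: docs/conf.py
  fail_on_warning: true
build:
  os: ubuntu-22.04
  tools:
    python: "3.10"  # Keep in sync with .github/workflows/main.yml
python:
  install:
    - requirements: docs/requirements.txt
    - path: .
"""

config = yaml.safe_load(CONFIG)
# "3.10" must stay quoted in the YAML, otherwise it parses as the float 3.1.
print(config["build"]["tools"]["python"])
```

Quoting the Python version is the easy thing to get wrong here: unquoted `3.10` would be read as a YAML float and truncated to `3.1`.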
2 changes: 1 addition & 1 deletion docs/client/overview.rst
@@ -145,7 +145,7 @@ For example, to run a new job for a given spider with custom parameters::


Getting job information
^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^

To select a specific job for a project, use ``.jobs.get(<jobKey>)``::

50 changes: 2 additions & 48 deletions docs/conf.py
@@ -20,10 +20,6 @@
import sys
from datetime import datetime

from docutils import nodes
from sphinx.util.docfields import TypedField
from sphinx import addnodes


sys.path.insert(0, os.path.abspath('..'))

@@ -76,7 +72,7 @@
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
language = "en"

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
@@ -94,8 +90,7 @@

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'alabaster'
html_theme = "sphinx_rtd_theme"

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
@@ -171,44 +166,3 @@

html_theme = 'sphinx_rtd_theme'
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]


# disable cross-reference for ivar
# patch taken from http://stackoverflow.com/a/41184353/1932023
def patched_make_field(self, types, domain, items, env=None):
# type: (List, unicode, Tuple) -> nodes.field
def handle_item(fieldarg, content):
par = nodes.paragraph()
par += addnodes.literal_strong('', fieldarg) # Patch: this line added
# par.extend(self.make_xrefs(self.rolename, domain, fieldarg,
# addnodes.literal_strong))
if fieldarg in types:
par += nodes.Text(' (')
# NOTE: using .pop() here to prevent a single type node to be
# inserted twice into the doctree, which leads to
# inconsistencies later when references are resolved
fieldtype = types.pop(fieldarg)
if len(fieldtype) == 1 and isinstance(fieldtype[0], nodes.Text):
typename = u''.join(n.astext() for n in fieldtype)
par.extend(self.make_xrefs(self.typerolename, domain, typename,
addnodes.literal_emphasis))
else:
par += fieldtype
par += nodes.Text(')')
par += nodes.Text(' -- ')
par += content
return par

fieldname = nodes.field_name('', self.label)
if len(items) == 1 and self.can_collapse:
fieldarg, content = items[0]
bodynode = handle_item(fieldarg, content)
else:
bodynode = self.list_type()
for fieldarg, content in items:
bodynode += nodes.list_item('', handle_item(fieldarg, content))
fieldbody = nodes.field_body('', bodynode)
return nodes.field('', fieldname, fieldbody)


TypedField.make_field = patched_make_field
Gallaecio marked this conversation as resolved.
3 changes: 0 additions & 3 deletions docs/index.rst
@@ -2,9 +2,6 @@
Client interface for Scrapinghub API
====================================

.. image:: https://secure.travis-ci.org/scrapinghub/python-scrapinghub.svg?branch=master
:target: https://travis-ci.org/scrapinghub/python-scrapinghub

The ``scrapinghub`` is a Python library for communicating with the `Scrapinghub API`_.

.. _Scrapinghub API: https://doc.scrapinghub.com/scrapy-cloud.html#scrapycloud
2 changes: 2 additions & 0 deletions docs/requirements.txt
@@ -0,0 +1,2 @@
sphinx==7.2.6
sphinx-rtd-theme==2.0.0
2 changes: 1 addition & 1 deletion scrapinghub/client/__init__.py
@@ -38,7 +38,7 @@ class ScrapinghubClient(object):
If you need full access to *Scrapy Cloud* features, you'll need to
provide a Scrapinghub APIKEY through this argument or deploying ``SH_APIKEY``.
:param dash_endpoint: (optional) Scrapinghub Dash panel url.
:param \*\*kwargs: (optional) Additional arguments for
:param kwargs: (optional) Additional arguments for
:class:`~scrapinghub.hubstorage.HubstorageClient` constructor.

:ivar projects: projects collection,
6 changes: 3 additions & 3 deletions scrapinghub/client/collections.py
@@ -160,7 +160,7 @@ def get(self, key, **params):
"""Get item from collection by key.

:param key: string item key.
:param \*\*params: (optional) additional query params for the request.
:param params: (optional) additional query params for the request.
:return: an item dictionary if exists.
:rtype: :class:`dict`
"""
@@ -217,7 +217,7 @@ def iter(self, key=None, prefix=None, prefixcount=None, startts=None,
:param startts: UNIX timestamp at which to begin results.
:param endts: UNIX timestamp at which to end results.
:param requests_params: (optional) a dict with optional requests params.
:param \*\*params: (optional) additional query params for the request.
:param params: (optional) additional query params for the request.
:return: an iterator over items list.
:rtype: :class:`collections.abc.Iterable[dict]`
"""
@@ -243,7 +243,7 @@ def list(self, key=None, prefix=None, prefixcount=None, startts=None,
:param startts: UNIX timestamp at which to begin results.
:param endts: UNIX timestamp at which to end results.
:param requests_params: (optional) a dict with optional requests params.
:param \*\*params: (optional) additional query params for the request.
:param params: (optional) additional query params for the request.
:return: a list of items where each item is represented with a dict.
:rtype: :class:`list[dict]`
"""
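The docstring edits in this file (and the ones below) all make the same substitution: `\*\*params` becomes `params`. Sphinx info field lists identify a parameter by name alone, so the backslash-escaped asterisks were unnecessary noise in the source. A minimal sketch of the resulting style, using a hypothetical stand-in function rather than the library's real implementation:

```python
def get(key, **params):
    """Get an item from a collection by key.

    :param key: string item key.
    :param params: (optional) additional query params for the request.
    :return: an item dictionary, if it exists.
    :rtype: dict
    """
    # Hypothetical body for illustration only; the real client issues an
    # HTTP request here.
    return {"_key": key, **params}

print(get("foo_bar", prefixcount=1))
```

The rendered HTML is unchanged either way; the win is purely cleaner source, since `**params` no longer has to be escaped to survive reST emphasis parsing.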
8 changes: 4 additions & 4 deletions scrapinghub/client/frontiers.py
@@ -319,7 +319,7 @@ def add(self, fps):
def iter(self, **params):
"""Iterate through fingerprints in the slot.

:param \*\*params: (optional) additional query params for the request.
:param params: (optional) additional query params for the request.
:return: an iterator over fingerprints.
:rtype: :class:`collections.abc.Iterable[str]`
"""
@@ -331,7 +331,7 @@ def iter(self, **params):
def list(self, **params):
"""List fingerprints in the slot.

:param \*\*params: (optional) additional query params for the request.
:param params: (optional) additional query params for the request.
:return: a list of fingerprints.
:rtype: :class:`list[str]`
"""
@@ -355,7 +355,7 @@ def iter(self, mincount=None, **params):
"""Iterate through batches in the queue.

:param mincount: (optional) limit results with min amount of requests.
:param \*\*params: (optional) additional query params for the request.
:param params: (optional) additional query params for the request.
:return: an iterator over request batches in the queue where each
batch is represented with a dict with ('id', 'requests') field.
:rtype: :class:`collections.abc.Iterable[dict]`
@@ -369,7 +369,7 @@ def list(self, mincount=None, **params):
"""List request batches in the queue.

:param mincount: (optional) limit results with min amount of requests.
:param \*\*params: (optional) additional query params for the request.
:param params: (optional) additional query params for the request.
:return: a list of request batches in the queue where each batch
is represented with a dict with ('id', 'requests') field.
:rtype: :class:`list[dict]`
20 changes: 10 additions & 10 deletions scrapinghub/client/jobs.py
@@ -57,7 +57,7 @@ def count(self, spider=None, state=None, has_tag=None, lacks_tag=None,
in milliseconds.
:param endts: (optional) UNIX timestamp at which to end results,
in milliseconds.
:param \*\*params: (optional) other filter params.
:param params: (optional) other filter params.

:return: jobs count.
:rtype: :class:`int`
@@ -156,7 +156,7 @@ def iter(self, count=None, start=None, spider=None, state=None,
in milliseconds.
:param meta: (optional) request for additional fields, a single
field name or a list of field names to return.
:param \*\*params: (optional) other filter params.
:param params: (optional) other filter params.

:return: a generator object over a list of dictionaries of jobs summary
for a given filter params.
@@ -227,7 +227,7 @@ def list(self, count=None, start=None, spider=None, state=None,
in milliseconds.
:param meta: (optional) request for additional fields, a single
field name or a list of field names to return.
:param \*\*params: (optional) other filter params.
:param params: (optional) other filter params.

:return: list of dictionaries of jobs summary for a given filter params.
:rtype: :class:`list[dict]`
@@ -262,7 +262,7 @@ def run(self, spider=None, units=None, priority=None, meta=None,
:param job_settings: (optional) a dictionary with job settings.
:param cmd_args: (optional) a string with script command args.
:param environment: (optional) a dictionary with custom environment
:param \*\*params: (optional) additional keyword args.
:param params: (optional) additional keyword args.

:return: a job instance, representing the scheduled job.
:rtype: :class:`Job`
@@ -334,7 +334,7 @@ def summary(self, state=None, spider=None, **params):
:param state: (optional) a string state to filter jobs.
:param spider: (optional) a spider name (not needed if instantiated
with :class:`~scrapinghub.client.spiders.Spider`).
:param \*\*params: (optional) additional keyword args.
:param params: (optional) additional keyword args.
:return: a list of dictionaries of jobs summary
for a given filter params grouped by job state.
:rtype: :class:`list[dict]`
@@ -362,7 +362,7 @@ def iter_last(self, start=None, start_after=None, count=None,
:param count: (optional)
:param spider: (optional) a spider name (not needed if instantiated
with :class:`~scrapinghub.client.spiders.Spider`).
:param \*\*params: (optional) additional keyword args.
:param params: (optional) additional keyword args.
:return: a generator object over a list of dictionaries of jobs summary
for a given filter params.
:rtype: :class:`types.GeneratorType[dict]`
@@ -512,7 +512,7 @@ def close_writers(self):
def start(self, **params):
"""Move job to running state.

:param \*\*params: (optional) keyword meta parameters to update.
:param params: (optional) keyword meta parameters to update.
:return: a previous string job state.
:rtype: :class:`str`

@@ -526,7 +526,7 @@ def start(self, **params):
def finish(self, **params):
"""Move running job to finished state.

:param \*\*params: (optional) keyword meta parameters to update.
:param params: (optional) keyword meta parameters to update.
:return: a previous string job state.
:rtype: :class:`str`

@@ -540,7 +540,7 @@ def finish(self, **params):
def delete(self, **params):
"""Mark finished job for deletion.

:param \*\*params: (optional) keyword meta parameters to update.
:param params: (optional) keyword meta parameters to update.
:return: a previous string job state.
:rtype: :class:`str`

@@ -555,7 +555,7 @@ def update(self, state, **params):
"""Update job state.

:param state: a new job state.
:param \*\*params: (optional) keyword meta parameters to update.
:param params: (optional) keyword meta parameters to update.
:return: a previous string job state.
:rtype: :class:`str`

2 changes: 1 addition & 1 deletion scrapinghub/client/logs.py
Original file line number Diff line number Diff line change
@@ -55,7 +55,7 @@ def log(self, message, level=logging.INFO, ts=None, **other):
:param message: a string message.
:param level: (optional) logging level, default to INFO.
:param ts: (optional) UNIX timestamp in milliseconds.
:param \*\*other: other optional kwargs.
:param other: other optional kwargs.
"""
self._origin.log(message, level=level, ts=ts, **other)

7 changes: 7 additions & 0 deletions tox.ini
@@ -13,3 +13,10 @@ deps =
msgpack: -r{toxinidir}/requirements.txt
pypy-msgpack: -r{toxinidir}/requirements-pypy.txt
commands = py.test --cov=scrapinghub --cov-report=xml {posargs: scrapinghub tests}

[testenv:docs]
changedir = docs
deps =
-r docs/requirements.txt
commands =
sphinx-build -W -b html . {envtmpdir}/html