Merge pull request #133 from ZLLentz/daq-api
FIX: for pcdsdaq changes
ZLLentz authored Jun 27, 2018
2 parents 5e3aa36 + 2496fd5 commit 03e6075
Showing 9 changed files with 105 additions and 138 deletions.
2 changes: 1 addition & 1 deletion conda-recipe/meta.yaml
@@ -23,7 +23,7 @@ requirements:
- pyfiglet
- happi >=1.1.1
- pcdsdevices >=0.6.0
- pcdsdaq >=1.1.0
- pcdsdaq >1.2.0
- psdm_qs_cli >=0.2.0
- lightpath >=0.3.0
- elog
1 change: 0 additions & 1 deletion docs/source/load_parts.rst
@@ -7,7 +7,6 @@ Submodules used by :py:mod:`load_conf`
:toctree: generated
:nosignatures:

hutch_python.daq.get_daq_objs
hutch_python.happi.get_happi_objs
hutch_python.happi.get_lightpath
hutch_python.user_load.get_user_objs
7 changes: 7 additions & 0 deletions docs/source/releases.rst
@@ -4,6 +4,13 @@ Release History
Next Release
============

Features
--------
- Provide well-curated namespaces for ``bluesky`` plans. These are in the
shell as ``bp`` (bluesky plans) for normal plans, ``bps`` (bluesky plan
stubs) for plans that are not complete on their own, and ``bpp``
(bluesky plan preprocessors) for plans that modify other plans.
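
  A minimal sketch of using the three namespaces together, assuming a
  configured ``RunEngine`` named ``RE`` and the simulated ``det`` and
  ``motor`` from ``ophyd.sim``:

    from ophyd.sim import det, motor

    # bp: complete plans, ready to hand to the RunEngine
    RE(bp.scan([det], motor, -1, 1, num=5))

    # bps: building blocks for writing your own plans
    def move_then_count():
        yield from bps.mv(motor, 0)
        yield from bps.trigger_and_read([det])

    # bpp: wrappers that modify other plans; run_wrapper opens and closes a run
    RE(bpp.run_wrapper(move_then_count()))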

Bugfixes
---------
- Show a correct error message when there is an ``ImportError`` in an
109 changes: 52 additions & 57 deletions docs/source/tutorial.ipynb
@@ -201,7 +201,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"There are a few utilities if you don't see a grouping you like, `class_namespace`, `metadata_namespace` allow you to create these groupings quickly based on information about a device."
"There are a few utilities if you don't see a grouping you like, `class_namespace`, `tree_namespace` allow you to create these groupings quickly based on information about a device."
]
},
{
@@ -458,11 +458,18 @@
"metadata": {},
"source": [
"One of the major additions the `hutch_python` restructuring gives us is access to `bluesky` scanning capabilities. A full tutorial is available at http://nsls-ii.github.io/bluesky/tutorial. This very brief introduction assumes you know a little about how `bluesky` works in general. The major keys are:\n",
"\n",
"* The `RunEngine` object is responsible for executing all scans. \n",
"* Experimental procedures are described in `plans`, python generators which allow sophisticated flow control\n",
"\n",
"\n",
"The hutch-python environment should already a `RunEngine` instatiated for you to begin playing\n",
"The hutch-python environment should already have a `RunEngine` instatiated for you to begin playing.\n",
"It also has built-in `bluesky` `plans` that are tab-accessible in the following objects:\n",
"\n",
"* `bp`: full plans that are ready to use\n",
"* `bps`: partial plans that can be used as building blocks for larger plans\n",
"* `bpp`: wrappers that add functionality to existing plans\n",
"\n",
"\n",
"#### Note\n",
"In the examples below we run our scans with simulated hardware built-in to the `ophyd` library.\n"
@@ -481,7 +488,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The simplest plan in `bluesky` is `count`, a simple reading of a detector. A basic execution looks like this where we take 5 measurements from the detector. Take note what is actually happen here. We create an instance of `count` then we pass this into `RE()`. "
"The simplest plan in `bluesky` is `count`, a simple reading of a detector. A basic execution looks like this where we take 5 measurements from the detector. Take note what is actually happenning here. We create an instance of `count` then we pass this into `RE()`. "
]
},
{
@@ -498,9 +505,20 @@
"New stream: 'primary'\n",
"+-----------+------------+------------+\n",
"| seq_num | time | det |\n",
"+-----------+------------+------------+\n"
"+-----------+------------+------------+\n",
"| 1 | 09:18:37.9 | 1.000 |\n",
"| 2 | 09:18:38.1 | 1.000 |\n",
"| 3 | 09:18:38.1 | 1.000 |\n",
"| 4 | 09:18:38.1 | 1.000 |\n",
"| 5 | 09:18:38.1 | 1.000 |\n",
"+-----------+------------+------------+\n",
"generator count ['dda5fba6'] (scan num: 1)\n",
"\n",
"\n",
"\n"
]
},

{
"data": {
"application/javascript": [
@@ -1293,22 +1311,6 @@
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"| 1 | 09:18:37.9 | 1.000 |\n",
"| 2 | 09:18:38.1 | 1.000 |\n",
"| 3 | 09:18:38.1 | 1.000 |\n",
"| 4 | 09:18:38.1 | 1.000 |\n",
"| 5 | 09:18:38.1 | 1.000 |\n",
"+-----------+------------+------------+\n",
"generator count ['dda5fba6'] (scan num: 1)\n",
"\n",
"\n",
"\n"
]
},
{
"data": {
"text/plain": [
Expand All @@ -1321,7 +1323,7 @@
}
],
"source": [
"RE(plans.count([det], num=5))"
"RE(bp.count([det], num=5))"
]
},
{
@@ -1345,7 +1347,22 @@
"New stream: 'primary'\n",
"+-----------+------------+------------+------------+\n",
"| seq_num | time | motor | det |\n",
"+-----------+------------+------------+------------+\n"
"+-----------+------------+------------+------------+\n",
"| 1 | 09:18:38.3 | -5.000 | 0.000 |\n",
"| 2 | 09:18:38.4 | -3.889 | 0.001 |\n",
"| 3 | 09:18:38.4 | -2.778 | 0.021 |\n",
"| 4 | 09:18:38.4 | -1.667 | 0.249 |\n",
"| 5 | 09:18:38.4 | -0.556 | 0.857 |\n",
"| 6 | 09:18:38.4 | 0.556 | 0.857 |\n",
"| 7 | 09:18:38.5 | 1.667 | 0.249 |\n",
"| 8 | 09:18:38.5 | 2.778 | 0.021 |\n",
"| 9 | 09:18:38.5 | 3.889 | 0.001 |\n",
"| 10 | 09:18:38.5 | 5.000 | 0.000 |\n",
"+-----------+------------+------------+------------+\n",
"generator scan ['8f13258c'] (scan num: 2)\n",
"\n",
"\n",
"\n"
]
},
{
@@ -2140,27 +2157,6 @@
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"| 1 | 09:18:38.3 | -5.000 | 0.000 |\n",
"| 2 | 09:18:38.4 | -3.889 | 0.001 |\n",
"| 3 | 09:18:38.4 | -2.778 | 0.021 |\n",
"| 4 | 09:18:38.4 | -1.667 | 0.249 |\n",
"| 5 | 09:18:38.4 | -0.556 | 0.857 |\n",
"| 6 | 09:18:38.4 | 0.556 | 0.857 |\n",
"| 7 | 09:18:38.5 | 1.667 | 0.249 |\n",
"| 8 | 09:18:38.5 | 2.778 | 0.021 |\n",
"| 9 | 09:18:38.5 | 3.889 | 0.001 |\n",
"| 10 | 09:18:38.5 | 5.000 | 0.000 |\n",
"+-----------+------------+------------+------------+\n",
"generator scan ['8f13258c'] (scan num: 2)\n",
"\n",
"\n",
"\n"
]
},
{
"data": {
"text/plain": [
@@ -2173,7 +2169,7 @@
}
],
"source": [
"plan = plans.scan([det], motor, -5, 5, num=10)\n",
"plan = bp.scan([det], motor, -5, 5, num=10)\n",
"RE(plan)"
]
},
@@ -2281,13 +2277,13 @@
}
],
"source": [
"RE(plans.adaptive_scan([det], 'det', motor,\n",
" start=-15,\n",
" stop=10,\n",
" min_step=0.01,\n",
" max_step=5,\n",
" target_delta=.05,\n",
" backstep=True))"
"RE(bp.adaptive_scan([det], 'det', motor,\n",
" start=-15,\n",
" stop=10,\n",
" min_step=0.01,\n",
" max_step=5,\n",
" target_delta=.05,\n",
" backstep=True))"
]
},
{
@@ -2296,7 +2292,7 @@
"source": [
"### Including the DAQ\n",
"\n",
"What has been ignored in these examples is the inclusion of the DAQ. There are a few simple helper functions that cover basic modes of operations. The simplest behavior is just running the DAQ throughout the whole scan and stopping at the end. This can be accomlished by passing any plan you want to run this way through the `daq_wrapper` and setting the mode to `\"on\"`"
"What has been ignored in these examples is the inclusion of the DAQ. There are a few simple helper functions that cover basic modes of operations. The simplest behavior is just running the DAQ throughout the whole scan and stopping at the end. This can be accomlished by passing any plan you want to run this way through the `daq_wrapper`"
]
},
{
@@ -2344,14 +2340,14 @@
}
],
"source": [
"RE(plans.daq_wrapper(plans.scan([det], motor, -5, 5, num=10), mode='on'))"
"RE(bpp.daq_wrapper(bp.scan([det], motor, -5, 5, num=10)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"More complex behavior is available. For instance, if you want the DAQ to only run at certain points in your scan you can set the mode to `\"manual\"`. This requries the plan itself to start and stop the DAQ when necessary. The most common mode of operation for this mode is when performing calibration cycles. For this we can utilize the `calib_at_step` stub plan built-in into the `pcdsdaq` module. This function is then placed in the `per_step`"
"More complex behavior is available. You can have the daq run at every scan step by passing it into a scan instead of or in addition to a `det` input from any of the previous examples. The most common mode of operation for this mode is when performing calibration cycles. You can also make the daq run at very specific times by writing your own plans that do `yield from bps.trigger_and_read(daq)`."
]
},
{
Expand Down Expand Up @@ -2399,9 +2395,8 @@
}
],
"source": [
"RE(plans.daq_wrapper(plans.scan([det], motor, -5, 5, num=10,\n",
" per_step=plans.calib_at_step(events=50)),\n",
" mode='manual'))"
"daq.configure(events=120)\n",
"RE(bp.scan([daq, det], motor, -5, 5, num=10))"
]
}
],
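The last tutorial cell above mentions writing your own plans that call ``bps.trigger_and_read(daq)`` so the DAQ records only at very specific times. A minimal sketch of that pattern, assuming a hutch-python session where ``daq``, ``bps``, ``RE``, and a ``motor`` already exist, and that the daq has already been configured (e.g. ``daq.configure(events=120)``):

    def daq_at_chosen_points(mot):
        # Hypothetical custom plan: only take DAQ data at two positions
        yield from bps.open_run()
        for position in (0, 5):
            yield from bps.mv(mot, position)
            yield from bps.trigger_and_read([daq])
        yield from bps.close_run()

    RE(daq_at_chosen_points(motor))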
22 changes: 1 addition & 21 deletions docs/source/yaml_files.rst
@@ -3,7 +3,7 @@ Yaml Files

``hutch-python`` uses a ``conf.yml`` file for basic configuration. This is a
standard yaml file with five valid keys:
``hutch``, ``db``, ``load``, ``experiment``, and ``daq_platform``.
``hutch``, ``db``, ``load``, and ``experiment``.


hutch
@@ -86,23 +86,6 @@ This key is used to force the questionnaire and experiment file to be from a
particular experiment.


daq_platform
------------

The ``daq_platform`` is another optional key that can be used to configure
which ``platform`` your running daq uses on a per-hutch or per-host basis.
The default ``platform`` is zero, but you can set a different ``platform``
for your hutch by using the ``default`` key as shown below. You can set a
platform for a particular host by using that host's name as a key as shown
below.

.. code-block:: YAML

   daq_platform:
     default: 4
     cxi-control: 5
Full File Example
-----------------

@@ -114,6 +97,3 @@
   load:
     - xpp.beamline
   daq_platform:
     default: 1
30 changes: 0 additions & 30 deletions hutch_python/daq.py

This file was deleted.

10 changes: 5 additions & 5 deletions hutch_python/load_conf.py
@@ -12,12 +12,12 @@
from bluesky.callbacks.best_effort import BestEffortCallback
from bluesky.utils import install_kicker
from elog import HutchELog
from pcdsdaq.daq import Daq
from pcdsdevices.mv_interface import setup_preset_paths

from . import plan_defaults
from .cache import LoadCache
from .constants import VALID_KEYS
from .daq import get_daq_objs
from .exp_load import get_exp_objs
from .happi import get_happi_objs, get_lightpath
from .namespace import class_namespace, tree_namespace
@@ -209,13 +209,13 @@ def load_conf(conf, hutch_dir=None):
pass

    # Collect Plans
    cache(plans=plan_defaults)
    cache(p=plan_defaults)
    cache(bp=plan_defaults.plans)
    cache(bps=plan_defaults.plan_stubs)
    cache(bpp=plan_defaults.preprocessors)

    # Daq
    with safe_load('daq'):
        daq_objs = get_daq_objs(daq_platform, RE)
        cache(**daq_objs)
        cache(daq=Daq(RE=RE))

    # Happi db and Lightpath
    if db is not None:
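Since ``load_conf`` now just caches ``daq=Daq(RE=RE)``, interactive control goes through the ``pcdsdaq`` ``Daq`` object itself. A hedged sketch of typical shell usage (method names assumed from the ``pcdsdaq`` API; event counts are arbitrary):

    daq.connect()              # attach to the running DAQ for this hutch
    daq.configure(events=120)  # set how many events each scan point records
    daq.begin(events=120)      # or start a fixed-length acquisition by hand
    daq.wait()                 # block until the requested events are taken
    daq.end_run()              # close out the run when finished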
43 changes: 39 additions & 4 deletions hutch_python/plan_defaults.py
@@ -1,4 +1,39 @@
# flake8: NOQA
from bluesky.plans import *
from pcdsdaq.plans import (calib_cycle, calib_at_step,
                           daq_wrapper, daq_decorator)
from importlib import import_module
from inspect import isgeneratorfunction
from types import SimpleNamespace


def collect_plans(modules):
    """
    Take all the plans in ``modules`` and collect them into a namespace.
    Arguments
    ---------
    modules: ``list of str``
        The modules to extract plans from.
    """
    plans = {}
    for module_name in modules:
        module = import_module(module_name)
        for name, obj in module.__dict__.items():
            try:
                # Only include things that are natively from this module
                if obj.__module__ == module_name:
                    try:
                        # Check the __wrapped__ attribute for decorators
                        if isgeneratorfunction(obj.__wrapped__):
                            plans[name] = obj
                    except AttributeError:
                        # Not a decorator, check obj
                        if isgeneratorfunction(obj):
                            plans[name] = obj
            except AttributeError:
                # obj did not have __module__, probably a builtin
                pass
    return SimpleNamespace(**plans)


plans = collect_plans(['bluesky.plans'])
plan_stubs = collect_plans(['bluesky.plan_stubs'])
preprocessors = collect_plans(['bluesky.preprocessors',
                               'pcdsdaq.preprocessors'])
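
For reference, a small sketch of what the collected namespaces contain once this module is imported (the attribute names shown are standard ``bluesky`` names; the printout is only illustrative):

    from hutch_python import plan_defaults

    # Each namespace is a SimpleNamespace of generator functions
    scan = plan_defaults.plans.scan                         # same object as bluesky.plans.scan
    mv = plan_defaults.plan_stubs.mv                        # same object as bluesky.plan_stubs.mv
    run_wrapper = plan_defaults.preprocessors.run_wrapper   # from bluesky.preprocessors

    # Non-plan helpers are filtered out by the isgeneratorfunction checks above
    print(sorted(vars(plan_defaults.plans))[:5])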