diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index 1a76531ea9..d2a3ae631b 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -1,7 +1,6 @@ ### Summary -Please describe what this PR changes as concisely as possible. Link to the issue(s) -that this addresses, if any. +Please describe what this PR changes as concisely as possible. Link to the issue(s) that this addresses, if any. ### Details and comments @@ -12,15 +11,11 @@ Some details that should be in this section include: - What tests and documentation have been added/updated - What do users and developers need to know about this change -Note that this entire PR description field will be used as the commit message upon -merge, so please keep it updated along with the PR. Secondary discussions, such as -intermediate testing and bug statuses that do not affect the final PR, should be in the -PR comments. +Note that this entire PR description field will be used as the commit message upon merge, so please keep it updated along with the PR. Secondary discussions, such as intermediate testing and bug statuses that do not affect the final PR, should be in the PR comments. ### PR checklist (delete when all criteria are met) - [ ] I have read the contributing guide `CONTRIBUTING.md`. - [ ] I have added the tests to cover my changes. - [ ] I have updated the documentation accordingly. -- [ ] I have added a release note file using `reno` if this change needs to be - documented in the release notes. +- [ ] I have added a release note file using `reno` if this change needs to be documented in the release notes. diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 60c55f3438..34381bc484 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -227,8 +227,8 @@ should look something like: ```yaml features: - | - Introduced a new feature foo, that adds support for doing something to - ``QuantumCircuit`` objects. 
It can be used by using the foo function, + Introduced a new feature foo that adds support for doing something to + :class:`~qiskit.circuit.QuantumCircuit` objects. It can be used by calling the foo function, for example:: from qiskit import foo @@ -236,9 +236,9 @@ features: foo(QuantumCircuit()) - | - The ``qiskit.QuantumCircuit`` module has a new method ``foo()``. This is - the equivalent of calling the ``qiskit.foo()`` to do something to your - QuantumCircuit. This is the equivalent of running ``qiskit.foo()`` on + The :class:`~qiskit.circuit.QuantumCircuit` class has a new method :meth:`.foo`. This is + the equivalent of calling :func:`qiskit.foo` to do something to your + QuantumCircuit. It is the same as running :func:`qiskit.foo` on your circuit, but provides the convenience of running it natively on an object. For example:: @@ -249,11 +249,11 @@ features: deprecations: - | - The ``qiskit.bar`` module has been deprecated and will be removed in a - future release. Its sole function, ``foobar()`` has been superseded by the - ``qiskit.foo()`` function which provides similar functionality but with + The :mod:`qiskit.bar` module has been deprecated and will be removed in a + future release. Its sole function, :func:`foobar`, has been superseded by the + :func:`qiskit.foo` function, which provides similar functionality but with more accurate results and better performance. You should update your calls - ``qiskit.bar.foobar()`` calls to ``qiskit.foo()``. + to :func:`qiskit.bar.foobar` to use :func:`qiskit.foo` instead. ``` You can also look at existing release notes for more examples. @@ -348,9 +348,12 @@ There are a few other build options available: Qiskit Experiments is part of Qiskit and, therefore, the [Qiskit Deprecation Policy](https://qiskit.org/documentation/contributing_to_qiskit.html#deprecation-policy) fully applies here. We have a deprecation decorator for showing deprecation warnings. 
To -deprecate a function: +deprecate a function, for example: ```python + + from qiskit_experiments.warnings import deprecated_function + @deprecated_function(last_version="0.3", msg="Use new_function instead.") def old_function(*args, **kwargs): pass @@ -361,6 +364,8 @@ deprecate a function: To deprecate a class: ```python + from qiskit_experiments.warnings import deprecated_class + @deprecated_class(last_version="0.3", new_cls=NewCls) class OldClass: pass @@ -408,5 +413,5 @@ following steps: 4. Generate a PR on the meta-repository to bump the qiskit-experiments version and meta-package version. -The `stable/*` branches should only receive changes in the form of bug fixes. +The `stable/*` branches should only receive changes in the form of bug fixes. If you're making a bug fix PR that you believe should be backported to the current stable release, tag it with `backport stable potential`. diff --git a/docs/GUIDELINES.md b/docs/GUIDELINES.md index 8ed4ba4a45..32fa658947 100644 --- a/docs/GUIDELINES.md +++ b/docs/GUIDELINES.md @@ -6,6 +6,8 @@ Contents: - [Guidelines for writing documentation](#guidelines-for-writing-documentation) - [Introduction](#introduction) - [General formatting guidelines](#general-formatting-guidelines) + - [Writing code](#writing-code) + - [Referencing objects](#referencing-objects) - [Tutorials](#tutorials) - [How-to guides](#how-to-guides) - [Experiment manuals](#experiment-manuals) @@ -23,6 +25,22 @@ Qiskit Experiments documentation is split into four sections: - Experiment manuals for information on specific experiments - API reference for technical documentation +### General formatting guidelines + +* For experiments, the documentation title should be just the name of the experiment. Use + regular capitalization. +* Use headers, subheaders, subsubheaders etc. for hierarchical text organization. 
No + need to number the headers. +* Use present progressive for subtitles, such as "Saving experiment data to the + database" instead of "Save experiment data to the database" +* Use math notation as much as possible (e.g. use $\frac{\pi}{2}$ instead of pi-half or + pi/2) +* Use device names as shown in the IBM Quantum Services dashboard, e.g. `ibmq_lima` + instead of IBMQ Lima +* Put identifier names (e.g. `osc_freq`) in code blocks using double backticks, i.e. `osc_freq` + +### Writing code + All documentation is written in reStructuredText format and then built into formatted text by Sphinx. Code cells can be written using `jupyter-execute` blocks, which will be automatically executed, with both code and output shown to the user: @@ -31,36 +49,29 @@ automatically executed, with both code and output shown to the user: # write Python code here -Your code should use the appropriate mock backend to show what expected experiment -results might look like for the user. To instantiate a mock backend without exposing it -to the user, use the `:hide-code:` and `:hide-output:` directives: - - .. jupyter-execute:: - :hide-code: - :hide-output: - - from qiskit.test.ibmq_mock import mock_get_backend - backend = mock_get_backend('FakeLima') - To display a block without actually executing the code, use the `.. jupyter-input::` and `.. jupyter-output::` directives. To ignore an error from a Jupyter cell block, use the `:raises:` directive. To see more options, consult the [Jupyter Sphinx documentation](https://jupyter-sphinx.readthedocs.io/en/latest/). -### General formatting guidelines +### Referencing objects + +Modules, classes, methods, functions, and attributes mentioned in the documentation +should link to their API documentation whenever possible using the `:mod:`, `:class:`, +`:meth:`, `:func:`, and `:attr:` directives followed by the name of the object in single +backticks. 
Here are some common usage patterns: + +- `` :class:`.CurveAnalysis` ``: This will render a link to the curve analysis class + `CurveAnalysis` if its name is unique. +- `` :class:`qiskit_experiments.curve_analysis.CurveAnalysis` ``: This will render the + full path to the object with a link as long as the path is correct. +- `` :class:`~qiskit_experiments.curve_analysis.CurveAnalysis` ``: This will render only + the object name itself instead of the full path. It's simpler to use the first pattern + instead if the name is unique. + +Consult the [Sphinx documentation](https://www.sphinx-doc.org/en/master/usage/restructuredtext/domains.html) for more detailed syntax. -* For experiments, documentation title should be just the name of the experiment. Use - regular capitalization. -* Use headers, subheaders, subsubheaders etc. for hierarchical text organization. No - need to number the headers -* Use present progressive for subtitles, such as "Saving experiment data to the - database" instead of "Save experiment data to the database" -* Use math notation as much as possible (e.g. use $\frac{\pi}{2}$ instead of pi-half or - pi/2) -* Use device names as shown in the IBM Quantum Services dashboard, e.g. `ibmq_lima` - instead of IBMQ Lima -* put identifier names (e.g. osc_freq) in code blocks using double backticks, i.e. `osc_freq` -Below we provide templates and guidelines for each of these types of documentation. +Below are templates and guidelines for each of these types of documentation. ### Tutorials diff --git a/docs/_ext/autoref.py b/docs/_ext/autoref.py index 4d303d0246..5f78b23f58 100644 --- a/docs/_ext/autoref.py +++ b/docs/_ext/autoref.py @@ -30,6 +30,7 @@ class WebSite(Directive): .. ref_website:: qiskit-experiments, https://github.com/Qiskit/qiskit-experiments """ + required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True @@ -67,6 +68,7 @@ class Arxiv(Directive): If an article is not found, no journal information will be shown. 
""" + required_arguments = 2 optional_arguments = 0 final_argument_whitespace = False @@ -95,7 +97,7 @@ def run(self): if journal: ret_node += nodes.Text(journal) ret_node += nodes.Text(" ") - ret_node += nodes.reference(text="(open)", refuri=paper.pdf_url) + ret_node += nodes.reference(text="(open)", refuri=paper.entry_id) return [ret_node] diff --git a/docs/conf.py b/docs/conf.py index 1085b52310..919aec8fbc 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -10,13 +10,6 @@ # copyright notice, and modified files need to carry a notice indicating # that they have been altered from the originals. -# pylint: disable=invalid-name -# Configuration file for the Sphinx documentation builder. -# -# This file does only contain a selection of the most common options. For a -# full list see the documentation: -# http://www.sphinx-doc.org/en/master/config - """ Sphinx documentation builder. """ @@ -53,13 +46,6 @@ # -- General configuration --------------------------------------------------- -# If your documentation needs a minimal Sphinx version, state it here. -# -# needs_sphinx = '1.0' - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. 
extensions = [ "sphinx.ext.napoleon", "sphinx.ext.autodoc", @@ -94,11 +80,11 @@ # These should ideally be automatically generated using a custom macro to specify # chosen cells for thumbnails, like the nbsphinx-gallery tag nbsphinx_thumbnails = { - "manuals/benchmarking/quantum_volume": "_images/quantum_volume_2_0.png", + "manuals/verification/quantum_volume": "_images/quantum_volume_2_0.png", "manuals/measurement/readout_mitigation": "_images/readout_mitigation_4_0.png", - "manuals/benchmarking/randomized_benchmarking": "_images/randomized_benchmarking_3_1.png", + "manuals/verification/randomized_benchmarking": "_images/randomized_benchmarking_3_1.png", "manuals/measurement/restless_measurements": "_images/restless_shots.png", - "manuals/benchmarking/state_tomography": "_images/state_tomography_3_0.png", + "manuals/verification/state_tomography": "_images/state_tomography_3_0.png", "manuals/characterization/t1": "_images/t1_0_0.png", "manuals/characterization/t2ramsey": "_images/t2ramsey_4_0.png", "manuals/characterization/tphi": "_images/tphi_5_1.png", @@ -121,11 +107,7 @@ # strings that are used for format of figure numbers. As a special character, # %s will be replaced to figure number. numfig_format = {"table": "Table %s"} -# The language for content autogenerated by Sphinx. Refer to documentation -# for a list of supported languages. -# -# This is also used if you do content translation via gettext catalogs. -# Usually you set "language" from the command line for these cases. + language = "en" # List of patterns, relative to source directory, that match files and @@ -185,9 +167,6 @@ if os.getenv("EXPERIMENTS_DEV_DOCS", None): rst_prolog = """ -.. raw:: html - -


.. note:: This is the documentation for the current state of the development branch of Qiskit Experiments. The documentation or APIs here can change prior to being diff --git a/docs/howtos/cloud_service.rst b/docs/howtos/cloud_service.rst index ec9a68f093..a3638fa039 100644 --- a/docs/howtos/cloud_service.rst +++ b/docs/howtos/cloud_service.rst @@ -36,7 +36,7 @@ backend and not a simulator to be able to save the experiment data. This is done t1_delays = np.arange(1e-6, 600e-6, 50e-6) - exp = T1(qubit=0, delays=t1_delays) + exp = T1(physical_qubits=(0,), delays=t1_delays) t1_expdata = exp.run(backend=backend).block_for_results() t1_expdata.save() @@ -131,7 +131,7 @@ The :meth:`~.ExperimentData.auto_save` feature automatically saves changes to th .. jupyter-input:: - exp = T1(qubit=0, delays=t1_delays) + exp = T1(physical_qubits=(0,), delays=t1_delays) t1_expdata = exp.run(backend=backend, shots=1000) t1_expdata.auto_save = True diff --git a/docs/howtos/job_splitting.rst b/docs/howtos/job_splitting.rst index 0785551e67..495841b174 100644 --- a/docs/howtos/job_splitting.rst +++ b/docs/howtos/job_splitting.rst @@ -15,7 +15,7 @@ You can set the ``max_circuits`` option manually when running an experiment: .. jupyter-input:: - exp = Experiment([0]) + exp = Experiment((0,)) exp.set_experiment_options(max_circuits=100) The experiment class will split its circuits into jobs such that no job has more than diff --git a/docs/howtos/new_experimentdata.rst b/docs/howtos/new_experimentdata.rst deleted file mode 100644 index 24dcc11fd7..0000000000 --- a/docs/howtos/new_experimentdata.rst +++ /dev/null @@ -1,87 +0,0 @@ -Instantiate a new data object for an existing experiment -======================================================== - -Problem -------- - -You want to instantiate a new :class:`.ExperimentData` object from an existing -experiment whose jobs have finished execution successfully. - -Solution --------- - -.. note:: - This guide requires :mod:`qiskit-ibm-provider`. 
For how to migrate from the deprecated :mod:`qiskit-ibmq-provider` to :mod:`qiskit-ibm-provider`, - consult the `migration guide `_.\ - -Use the code template below. You need to recreate the exact experiment you ran and its -options, as well as the IDs of the jobs that were executed. The jobs must be accessible -through the provider that you use. - -.. jupyter-input:: - - from qiskit_experiments.framework import ExperimentData - from qiskit_ibm_provider import IBMProvider - - # The experiment you ran - experiment = Experiment(**opts) - - # List of job IDs for the experiment - job_ids= [job1, job2, ...] - - provider = IBMProvider() - - data = ExperimentData(experiment = experiment) - data.add_jobs([provider.retrieve_job(job_id) for job_id in job_ids]) - experiment.analysis.run(data) - - # Block execution of subsequent code until analysis is complete - data.block_for_results() - -``data`` will be the new experiment data object. - -Discussion ----------- - -This guide is helpful for cases such as a lost connection during experiment execution, -where the jobs may have finished running on the remote backends but the -:class:`.ExperimentData` class returned upon completion of an experiment does not -contain correct results. - -Recreation of the experiment object is often done by rerunning the code that you ran -previously to create it. It may sometimes be helpful instead to save an experiment and -restore it later with the following lines of code: - -.. jupyter-input:: - - serialized_exp = json.dumps(Experiment.config()) - Experiment.from_config(json.loads(serialized_exp)) - -You may also want to rerun the analysis with different options of a previously-run -experiment when you instantiate this new :class:`.ExperimentData` object. Here's a code -snippet where we reconstruct a parallel experiment consisting of randomized benchmarking -experiments, then change the gate error ratio as well as the line plot color of the -first component experiment. - -.. 
jupyter-input:: - - pexp = ParallelExperiment([ - StandardRB((i,), np.arange(1, 800, 200), num_samples=10) for i in range(2)]) - - pexp.analysis.component_analysis(0).options.gate_error_ratio = { - "x": 10, "sx": 1, "rz": 0 - } - pexp.analysis.component_analysis(0).plotter.figure_options.series_params.update( - { - "rb_decay": {"color": "r"} - } - ) - - data = ExperimentData(experiment=pexp) - data.add_jobs([provider.retrieve_job(job_id) for job_id in job_ids]) - pexp.analysis.run(data) - -See Also --------- - -* `Saving and loading experiment data with the cloud service `_ diff --git a/docs/howtos/rerun_analysis.rst b/docs/howtos/rerun_analysis.rst new file mode 100644 index 0000000000..1ea7b9429d --- /dev/null +++ b/docs/howtos/rerun_analysis.rst @@ -0,0 +1,121 @@ +Rerun analysis for an existing experiment +========================================= + +Problem +------- + +You want to rerun the analysis, possibly with different options, and generate a new +:class:`.ExperimentData` object for an existing experiment whose jobs have finished +execution successfully. + +Solution +-------- + +.. note:: + Some of this guide uses the :mod:`qiskit-ibm-provider` package. For how to migrate from + the deprecated ``qiskit-ibmq-provider`` to ``qiskit-ibm-provider``, consult the + `migration guide `_.\ + +Once you recreate the exact experiment you ran and all of its parameters and options, +you can call the :meth:`.add_jobs` method with a list of :class:`Job +` objects to generate the new :class:`.ExperimentData` object. +The following example retrieves jobs from a provider that has access to them via their +job IDs: + +.. jupyter-input:: + + from qiskit_experiments.framework import ExperimentData + from qiskit_ibm_provider import IBMProvider + + # The experiment you ran + experiment = Experiment(**opts) + + # List of job IDs for the experiment + job_ids= [job1, job2, ...] 
+ + provider = IBMProvider() + + expdata = ExperimentData(experiment=experiment) + expdata.add_jobs([provider.retrieve_job(job_id) for job_id in job_ids]) + experiment.analysis.run(expdata) + + # Block execution of subsequent code until analysis is complete + expdata.block_for_results() + +``expdata`` will be the new experiment data object containing results of the rerun analysis. + +If you have the job data in the form of a :class:`~qiskit.result.Result` object, you can +invoke the :meth:`.add_data` method instead of :meth:`.add_jobs`: + +.. jupyter-input:: + + expdata.add_data([provider.retrieve_job(job_id).result() for job_id in job_ids]) + +The rest of the workflow remains the same. + +Note that for a composite experiment, you only need to run these code snippets for the +parent experiment. The child experiment data will be populated automatically. + +Discussion +---------- + +This guide is helpful for cases such as a lost connection during experiment +execution, where the jobs may have finished running on the remote backends but the +:class:`.ExperimentData` class returned upon completion of an experiment does not +contain correct results. + +In the case where jobs are not directly accessible from the provider but you've +downloaded the jobs from the +`IQS dashboard `_, you can load them from +the downloaded directory into :class:`~qiskit.result.Result` objects with this code: + +.. jupyter-input:: + + import json + from pathlib import Path + + from qiskit.result import Result + + result_dict = json.loads(next(Path('.').glob("*-result.txt")).read_text()) + result = Result.from_dict(result_dict) + +Recreation of the experiment object is often done by rerunning the code that you ran +previously to create it. It may sometimes be helpful instead to save an experiment and +restore it later with the following lines of code: + +.. 
jupyter-input:: + + serialized_exp = json.dumps(Experiment.config()) + Experiment.from_config(json.loads(serialized_exp)) + +Rerunning with different analysis options +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +You may also want to rerun the analysis of a previously-run experiment with different +options when you instantiate this new :class:`.ExperimentData` object. Here's a code +snippet where we reconstruct a parallel experiment consisting of randomized benchmarking +experiments, then change the gate error ratio as well as the line plot color of the +first component experiment. + +.. jupyter-input:: + + pexp = ParallelExperiment([ + StandardRB((i,), np.arange(1, 800, 200), num_samples=10) for i in range(2)]) + + pexp.analysis.component_analysis(0).options.gate_error_ratio = { + "x": 10, "sx": 1, "rz": 0 + } + pexp.analysis.component_analysis(0).plotter.figure_options.series_params.update( + { + "rb_decay": {"color": "r"} + } + ) + + data = ExperimentData(experiment=pexp) + data.add_jobs([provider.retrieve_job(job_id) for job_id in job_ids]) + pexp.analysis.run(data) + +See Also +-------- + +* `Saving and loading experiment data with the cloud service `_ diff --git a/docs/index.rst b/docs/index.rst index e3b0686a9b..d324676fea 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -48,7 +48,7 @@ We've divided up the documentation into four sections with different purposes: These standalone how-to guides provide short and direct solutions to some commonly asked questions for Qiskit Experiments users. You'll find in these guides: - * How to :doc:`re-instantiate experiment data for an existing experiment ` + * How to :doc:`rerun analysis for an existing experiment ` * How to :doc:`customize the splitting of circuits into jobs ` +++ @@ -68,7 +68,7 @@ We've divided up the documentation into four sections with different purposes: background, principle, and how to run them in Qiskit Experiments. 
You'll find in these manuals: - * How to analyze 1- and 2-qubit errors in :doc:`randomized benchmarking ` + * How to analyze 1- and 2-qubit errors in :doc:`randomized benchmarking ` * How to calculate the speedup from using :doc:`restless measurements ` +++ diff --git a/docs/manuals/characterization/t1.rst b/docs/manuals/characterization/t1.rst index c25ff1c9da..866be29415 100644 --- a/docs/manuals/characterization/t1.rst +++ b/docs/manuals/characterization/t1.rst @@ -61,7 +61,7 @@ for qubit 0. delays = np.arange(1e-6, 3 * qubit0_t1, 3e-5) # Create an experiment for qubit 0 # with the specified time intervals - exp = T1(physical_qubits=[0], delays=delays) + exp = T1(physical_qubits=(0,), delays=delays) # Set scheduling method so circuit is scheduled for delay noise simulation exp.set_transpile_options(scheduling_method='asap') @@ -118,7 +118,7 @@ that is close to a logical value '0'. ) # Creating a T1 experiment - expT1_kerneled = T1([0], delays) + expT1_kerneled = T1((0,), delays) expT1_kerneled.analysis = T1KerneledAnalysis() expT1_kerneled.analysis.set_options(p0={"amp": 1, "tau": t1[0] + t1_estimated_shift, "base": 0}) diff --git a/docs/manuals/characterization/t2hahn.rst b/docs/manuals/characterization/t2hahn.rst index 2eef6441e1..6170f60ea3 100644 --- a/docs/manuals/characterization/t2hahn.rst +++ b/docs/manuals/characterization/t2hahn.rst @@ -70,7 +70,7 @@ and can analytically extract the desired values. number_of_echoes = 1 # Create a T2Hahn experiment. 
Print the first circuit as an example - exp1 = T2Hahn(physical_qubits=[qubit], delays=delays, num_echoes=number_of_echoes) + exp1 = T2Hahn(physical_qubits=(qubit,), delays=delays, num_echoes=number_of_echoes) print(exp1.circuits()[0]) diff --git a/docs/manuals/characterization/t2ramsey.rst b/docs/manuals/characterization/t2ramsey.rst index 65e1cd2026..661fc38ffb 100644 --- a/docs/manuals/characterization/t2ramsey.rst +++ b/docs/manuals/characterization/t2ramsey.rst @@ -52,7 +52,7 @@ resulting function, and can analytically extract the desired values. .. jupyter-execute:: # Create a T2Ramsey experiment. Print the first circuit as an example - exp1 = T2Ramsey([qubit], delays, osc_freq=1e5) + exp1 = T2Ramsey((qubit,), delays, osc_freq=1e5) print(exp1.circuits()[0]) @@ -121,7 +121,7 @@ computed for other qubits. "phi": 0, "B": 0.5 } - exp_with_p0 = T2Ramsey([qubit], delays, osc_freq=1e5) + exp_with_p0 = T2Ramsey((qubit,), delays, osc_freq=1e5) exp_with_p0.analysis.set_options(p0=user_p0) exp_with_p0.set_transpile_options(scheduling_method='asap') expdata_with_p0 = exp_with_p0.run(backend=backend, shots=2000, seed_simulator=101) diff --git a/docs/manuals/characterization/tphi.rst b/docs/manuals/characterization/tphi.rst index 93d8cad789..3b81819feb 100644 --- a/docs/manuals/characterization/tphi.rst +++ b/docs/manuals/characterization/tphi.rst @@ -49,7 +49,7 @@ relaxation time estimate. We can see that the component experiments of the batch .. jupyter-execute:: - exp = Tphi(physical_qubits=[0], delays_t1=delays_t1, delays_t2=delays_t2, num_echoes=1) + exp = Tphi(physical_qubits=(0,), delays_t1=delays_t1, delays_t2=delays_t2, num_echoes=1) exp.component_experiment(0).circuits()[-1].draw("mpl") .. jupyter-execute:: @@ -84,7 +84,7 @@ experiment: .. 
jupyter-execute:: - exp = Tphi(physical_qubits=[0], + exp = Tphi(physical_qubits=(0,), delays_t1=delays_t1, delays_t2=delays_t2, t2type="ramsey", diff --git a/docs/manuals/index.rst b/docs/manuals/index.rst index a8117496d5..dfc66bc22e 100644 --- a/docs/manuals/index.rst +++ b/docs/manuals/index.rst @@ -4,20 +4,21 @@ Experiment Manuals These experiment manuals are in-depth dives into individual experiments, their operational principles, and how to run them in Qiskit Experiments. -.. _benchmarking: +.. _verification manuals: -Benchmarking Experiments +Verification Experiments ------------------------ -These experiments measure your device performance according to a set of defined -metrics, such as the space-time volume of circuits that can be successfully executed. +These experiments measure and verify your device performance according to a set of +defined metrics, such as the space-time volume of circuits that can be successfully +executed. .. nbgallery:: :glob: - benchmarking/* + verification/* -.. _qubit characterization: +.. _qubit characterization manuals: Qubit Characterization Experiments ---------------------------------- @@ -29,7 +30,7 @@ These experiment measure specific properties of a qubit. characterization/* -.. _measurement-related: +.. _measurement-related manuals: Measurement-Related Experiments ------------------------------- diff --git a/docs/manuals/benchmarking/quantum_volume.rst b/docs/manuals/verification/quantum_volume.rst similarity index 95% rename from docs/manuals/benchmarking/quantum_volume.rst rename to docs/manuals/verification/quantum_volume.rst index e57a192c2c..91b4a5ed84 100644 --- a/docs/manuals/benchmarking/quantum_volume.rst +++ b/docs/manuals/verification/quantum_volume.rst @@ -53,11 +53,11 @@ backend and on an ideal simulator: - ``seed``: Seed or generator object for random number generation. If ``None`` then ``default_rng`` will be used. 
-- ``simulation_backend``: The simulator backend to use to generate the - expected results. the simulator must have a ``save_probabilities`` - method. If None ``AerSimulator`` simulator will be used (in case - ``AerSimulator`` is not installed ``qiskit.quantum_info.Statevector`` - will be used). +- ``simulation_backend``: The simulator backend to use to generate the expected + results. The simulator must have a ``save_probabilities`` method. If ``None``, + :class:`~qiskit_aer.AerSimulator` will be used (in case + :class:`~qiskit_aer.AerSimulator` is not installed, + :class:`~qiskit.quantum_info.Statevector` will be used). **Note:** In some cases, 100 trials are not enough to obtain a QV greater than 1 for the specified number of qubits. In this case, adding @@ -138,7 +138,7 @@ Calculating Quantum Volume using a batch experiment Run the QV experiment with an increasing number of qubits to check what is the maximum Quantum Volume for the specific device. To reach the real system’s Quantum Volume, one must run more trials and additional -enhancements might be required (See Ref. [2] for details). +enhancements might be required (See Ref. [2]_ for details). .. jupyter-execute:: diff --git a/docs/manuals/benchmarking/randomized_benchmarking.rst b/docs/manuals/verification/randomized_benchmarking.rst similarity index 99% rename from docs/manuals/benchmarking/randomized_benchmarking.rst rename to docs/manuals/verification/randomized_benchmarking.rst index a1c6fa874c..76a158f0a6 100644 --- a/docs/manuals/benchmarking/randomized_benchmarking.rst +++ b/docs/manuals/verification/randomized_benchmarking.rst @@ -213,7 +213,7 @@ The default RB circuit output shows Clifford blocks: .. 
jupyter-execute:: # Run an RB experiment on qubit 0 - exp = StandardRB(physical_qubits=[0], lengths=[2], num_samples=1, seed=seed) + exp = StandardRB(physical_qubits=(0,), lengths=[2], num_samples=1, seed=seed) c = exp.circuits()[0] c.draw("mpl") diff --git a/docs/manuals/benchmarking/state_tomography.rst b/docs/manuals/verification/state_tomography.rst similarity index 100% rename from docs/manuals/benchmarking/state_tomography.rst rename to docs/manuals/verification/state_tomography.rst diff --git a/docs/tutorials/calibrations.rst b/docs/tutorials/calibrations.rst index 9021a6887e..af21bdb034 100644 --- a/docs/tutorials/calibrations.rst +++ b/docs/tutorials/calibrations.rst @@ -213,7 +213,7 @@ for both the :math:`X` pulse and the :math:`SX` pulse using a single experiment. .. jupyter-execute:: from qiskit_experiments.library.calibration import RoughXSXAmplitudeCal - rabi = RoughXSXAmplitudeCal([qubit], cals, backend=backend, amplitudes=np.linspace(-0.1, 0.1, 51)) + rabi = RoughXSXAmplitudeCal((qubit,), cals, backend=backend, amplitudes=np.linspace(-0.1, 0.1, 51)) The rough amplitude calibration is therefore a Rabi experiment in which each circuit contains a pulse with a gate. Different circuits correspond to pulses diff --git a/docs/tutorials/curve_analysis.rst b/docs/tutorials/curve_analysis.rst index 3cfc82bd4d..0998e9bbd1 100644 --- a/docs/tutorials/curve_analysis.rst +++ b/docs/tutorials/curve_analysis.rst @@ -163,19 +163,25 @@ Here is another example how to implement multi-objective optimization task: lmfit.models.ExpressionModel( expr="amp * exp(-alpha1 * x) + base", name="my_experiment1", - data_sort_key={"tag": 1}, ), lmfit.models.ExpressionModel( expr="amp * exp(-alpha2 * x) + base", name="my_experiment2", - data_sort_key={"tag": 2}, ), ] -Note that now you need to provide ``data_sort_key`` which is unique argument to -Qiskit curve analysis. 
This specifies the metadata of your experiment circuit +In addition, you need to provide the ``data_subfit_map`` analysis option, which may look like this: + +.. jupyter-input:: + + data_subfit_map = { + "my_experiment1": {"tag": 1}, + "my_experiment2": {"tag": 2}, + } + +This option specifies the metadata of your experiment circuit that is tied to the fit model. If multiple models are provided without this option, -the curve fitter cannot prepare data to fit. +the curve fitter cannot prepare the data for fitting. In this model, you have four parameters (``amp``, ``alpha1``, ``alpha2``, ``base``) and the two curves share ``amp`` (``base``) for the amplitude (baseline) in the exponential decay function. @@ -192,12 +198,10 @@ By using this model, one can flexibly set up your fit model. Here is another exa lmfit.models.ExpressionModel( expr="amp * cos(2 * pi * freq * x + phi) + base", name="my_experiment1", - data_sort_key={"tag": 1}, ), lmfit.models.ExpressionModel( expr="amp * sin(2 * pi * freq * x + phi) + base", name="my_experiment2", - data_sort_key={"tag": 2}, ), ] @@ -253,9 +257,9 @@ This code will give you identical fit model to the one defined in the following ) However, note that you can also inherit other features, e.g. the algorithm to -generate initial guesses for parameters, from the :class:`AnalysisA` in the first example. +generate initial guesses for parameters, from the ``AnalysisA`` class in the first example. On the other hand, in the latter case, you need to manually copy and paste -every logic defined in the :class:`AnalysisA`. +all of the logic defined in ``AnalysisA``. .. _curve_analysis_workflow: @@ -268,7 +272,7 @@ This workflow is defined in the method :meth:`CurveAnalysis._run_analysis`. 1. 
Initialization ^^^^^^^^^^^^^^^^^ -Curve analysis calls :meth:`_initialization` method where it initializes +Curve analysis calls the :meth:`_initialization` method, where it initializes some internal states and optionally populate analysis options with the input experiment data. In some case it may train the data processor with fresh outcomes, diff --git a/docs/tutorials/custom_experiment.rst b/docs/tutorials/custom_experiment.rst index ff0ec51f10..9a6b888ffe 100644 --- a/docs/tutorials/custom_experiment.rst +++ b/docs/tutorials/custom_experiment.rst @@ -10,7 +10,7 @@ the :class:`.BaseExperiment` class. We will discuss both cases in this tutorial. In general, to subclass :class:`.BaseExperiment` class, you should: - Implement the abstract :meth:`.BaseExperiment.circuits` method. - This should return a list of :class:`~qiskit.QuantumCircuit` objects defining + This should return a list of :class:`~qiskit.circuit.QuantumCircuit` objects defining the experiment payload. - Call the :meth:`.BaseExperiment.__init__` method during the subclass diff --git a/docs/tutorials/getting_started.rst b/docs/tutorials/getting_started.rst index 15ada32f8c..852f61fe6b 100644 --- a/docs/tutorials/getting_started.rst +++ b/docs/tutorials/getting_started.rst @@ -7,8 +7,8 @@ Installation Qiskit Experiments is built on top of Qiskit, so we recommend that you first install Qiskit following its :external+qiskit:doc:`installation guide `. Qiskit -Experiments supports the same platforms and Python versions (currently 3.7+) as Qiskit -itself. +Experiments supports the same platforms and Python versions (currently **3.7+**) as +Qiskit itself. Qiskit Experiments releases can be installed via the Python package manager ``pip``: @@ -66,7 +66,14 @@ IBM backend, real or simulated, that you can access through Qiskit. All experiments require a ``physical_qubits`` parameter as input that specifies which physical qubit or qubits the circuits will be executed on. 
The qubits must be given as a -Python sequence (usually a tuple or a list). In addition, the :math:`T_1` experiment has +Python sequence (usually a tuple or a list). + +.. note:: + Since 0.5.0, using ``qubits`` instead of ``physical_qubits`` or specifying an + integer qubit index instead of a one-element sequence for a single-qubit experiment + is deprecated. + +In addition, the :math:`T_1` experiment has a second required parameter, ``delays``, which is a list of times in seconds at which to measure the excited state population. In this example, we'll run the :math:`T_1` experiment on qubit 0, and use the ``t1`` backend property of this qubit to give us a @@ -200,11 +207,11 @@ The actual backend jobs that were executed for the experiment can be accessed wi :meth:`~.ExperimentData.jobs` method. .. note:: - See the how-tos for :doc:`instantiating a new ExperimentData object ` - from an existing experiment that finished execution. + See the how-tos for :doc:`rerunning the analysis ` + for an existing experiment that finished execution. -Setting experiment options -========================== +Setting options for your experiment +=================================== It's often insufficient to run an experiment with only its default options. There are four types of options one can set for an experiment: @@ -222,8 +229,8 @@ supports can be set: meas_level=MeasLevel.CLASSIFIED, meas_return="avg") -Consult the documentation of :meth:`qiskit.execute_function` or the run method of your -specific backend type for valid options. +Consult the documentation of :func:`qiskit.execute_function.execute` or the run method +of your specific backend type for valid options. Transpile options ----------------- @@ -242,7 +249,7 @@ Experiment options ------------------ These options are unique to each experiment class. 
Many experiment options can be set upon experiment instantiation, but can also be explicitly set via -:meth:`~BaseExperiment.set_experiment_options`: +:meth:`~.BaseExperiment.set_experiment_options`: .. jupyter-input:: diff --git a/docs/tutorials/visualization.rst b/docs/tutorials/visualization.rst index cd32209d01..0a6e7d61ee 100644 --- a/docs/tutorials/visualization.rst +++ b/docs/tutorials/visualization.rst @@ -55,7 +55,7 @@ First, we display the default figure from a :class:`.Rabi` experiment as a start backend = SingleTransmonTestBackend() rabi = Rabi( - qubit=0, + physical_qubits=(0,), backend=backend, schedule=sched, amplitudes=np.linspace(-0.1, 0.1, 21) @@ -64,7 +64,7 @@ First, we display the default figure from a :class:`.Rabi` experiment as a start rabi_data = rabi.run().block_for_results() rabi_data.figure(0) -This is the default figure generated by :class:`OscillationAnalysis`, the data analysis +This is the default figure generated by :class:`.OscillationAnalysis`, the data analysis class for the Rabi experiment. The fitted cosine is shown as a blue line, with the individual measurements from the experiment shown as data points with error bars corresponding to their uncertainties. We are also given a small fit report in the caption showing the @@ -130,7 +130,7 @@ to see what the default figure looks like: drag_experiment_helper = DragHelper(gate_name="Drag(xp)") backend = MockIQBackend(drag_experiment_helper) - drag = RoughDrag(0, xp, backend=backend) + drag = RoughDrag((0,), xp, backend=backend) drag_data = drag.run().block_for_results() drag_data.figure(0) @@ -139,7 +139,7 @@ Now we specify the figure options before running the experiment for a second tim .. jupyter-execute:: - drag = RoughDrag(0, xp, backend=backend) + drag = RoughDrag((0,), xp, backend=backend) # Set plotter options plotter = drag.analysis.plotter @@ -212,8 +212,7 @@ to label the IQ points as one of the three prepared states. 
:class:`.IQPlotter`
 plotting a discriminator as optional supplementary data, which will show predicted
 series over the axis area.
 
-.. jupyter-execute::
+.. jupyter-input::
 
     drag_experiment_helper = DragHelper(gate_name="Drag(xp)")
     backend = MockIQBackend(drag_experiment_helper)
@@ -251,32 +251,32 @@ series over the axis area.
             )
             return options
 
-    @property
-    def plotter(self) -> BasePlotter:
-        return self.options.plotter
-
-    def _run_analysis(self, experiment_data):
-        data = experiment_data.data()
-        analysis_results = []
-        for datum in data:
-            # Analysis code
-            analysis_results.append(self._analysis_result(datum))
-
-            # Plotting code
-            series_name = datum["metadata"]["name"]
-            points = datum["memory"]
-            centroid = np.mean(points, axis=0)
-            self.plotter.set_series_data(
-                series_name,
-                points=points,
-                centroid=centroid,
-            )
-
-        # Add discriminator to IQPlotter
-        discriminator = self._train_discriminator(data)
-        self.plotter.set_supplementary_data(discriminator=discriminator)
-
-        return analysis_results, [self.plotter.figure()]
+        @property
+        def plotter(self) -> BasePlotter:
+            return self.options.plotter
+
+        def _run_analysis(self, experiment_data):
+            data = experiment_data.data()
+            analysis_results = []
+            for datum in data:
+                # Analysis code
+                analysis_results.append(self._analysis_result(datum))
+
+                # Plotting code
+                series_name = datum["metadata"]["name"]
+                points = datum["memory"]
+                centroid = np.mean(points, axis=0)
+                self.plotter.set_series_data(
+                    series_name,
+                    points=points,
+                    centroid=centroid,
+                )
+
+            # Add discriminator to IQPlotter
+            discriminator = self._train_discriminator(data)
+            self.plotter.set_supplementary_data(discriminator=discriminator)
+
+            return analysis_results, [self.plotter.figure()]
 
 If we run the above analysis on some appropriate experiment data, as previously
 described, our class will generate a figure showing IQ points and their centroids.
diff --git a/qiskit_experiments/calibration_management/base_calibration_experiment.py b/qiskit_experiments/calibration_management/base_calibration_experiment.py index 242a0067d1..985f39a060 100644 --- a/qiskit_experiments/calibration_management/base_calibration_experiment.py +++ b/qiskit_experiments/calibration_management/base_calibration_experiment.py @@ -61,14 +61,14 @@ class should be this mixin and the second class should be the characterization .. code-block:: python - RoughFrequency(BaseCalibrationExperiment, QubitSpectroscopy) + RoughFrequencyCal(BaseCalibrationExperiment, QubitSpectroscopy) - This ensures that the :meth:`run` method of :class:`.RoughFrequency` will be the + This ensures that the ``run`` method of :class:`.RoughFrequencyCal` will be the run method of the :class:`.BaseCalibrationExperiment` class. Furthermore, developers must explicitly call the :meth:`__init__` methods of both parent classes. Developers should strive to follow the convention that the first two arguments of - a calibration experiment are the qubit(s) and the :class:`.Calibration` instance. + a calibration experiment are the qubit(s) and the :class:`.Calibrations` instance. If the experiment uses custom schedules, which is typically the case, then developers may chose to use the :meth:`get_schedules` method when creating the @@ -133,8 +133,8 @@ def __init__( updater: The updater class that updates the Calibrations instance. Different calibration experiments will use different updaters. auto_update: If set to True (the default) then the calibrations will automatically be - updated once the experiment has run and :meth:`block_for_results()` will be called. - kwargs: Key word arguments for the characterization class. + updated once the experiment has run and :meth:`.block_for_results` will be called. + kwargs: Keyword arguments for the characterization class. 
""" super().__init__(*args, **kwargs) self._cals = calibrations diff --git a/qiskit_experiments/calibration_management/basis_gate_library.py b/qiskit_experiments/calibration_management/basis_gate_library.py index 20f37026bf..30782331e0 100644 --- a/qiskit_experiments/calibration_management/basis_gate_library.py +++ b/qiskit_experiments/calibration_management/basis_gate_library.py @@ -120,8 +120,9 @@ def default_values(self) -> List[DefaultCalValue]: """Return the default values for the parameters. Returns - A list of tuples is returned. These tuples are structured so that instances of - :class:`.Calibrations` can call :meth:`.add_parameter_value` on the tuples. + A list of tuples is returned. These tuples are structured so that instances + of :class:`.Calibrations` can call :meth:`.Calibrations.add_parameter_value` + on the tuples. """ @abstractmethod @@ -286,8 +287,9 @@ def default_values(self) -> List[DefaultCalValue]: """Return the default values for the parameters. Returns - A list of tuples is returned. These tuples are structured so that instances of - :class:`.Calibrations` can call :meth:`.add_parameter_value` on the tuples. + A list of tuples is returned. These tuples are structured so that instances + of :class:`.Calibrations` can call :meth:`.Calibrations.add_parameter_value` + on the tuples. """ defaults = [] for name, schedule in self.items(): diff --git a/qiskit_experiments/curve_analysis/__init__.py b/qiskit_experiments/curve_analysis/__init__.py index e0e3451c19..b1884eb781 100644 --- a/qiskit_experiments/curve_analysis/__init__.py +++ b/qiskit_experiments/curve_analysis/__init__.py @@ -70,7 +70,7 @@ ErrorAmplificationAnalysis Fit Functions -************* +============= .. autosummary:: :toctree: ../stubs/ @@ -86,7 +86,7 @@ fit_function.bloch_oscillation_z Initial Guess Estimators -************************ +======================== .. autosummary:: :toctree: ../stubs/ @@ -101,7 +101,7 @@ guess.oscillation_exp_decay Utilities -********* +========= .. 
autosummary:: :toctree: ../stubs/ @@ -109,6 +109,13 @@ utils.analysis_result_to_repr utils.convert_lmfit_result utils.eval_with_uncertainties + utils.filter_data + utils.mean_xy_data + utils.multi_mean_xy_data + utils.data_sort + utils.level2_probability + utils.probability + """ from .base_curve_analysis import BaseCurveAnalysis from .curve_analysis import CurveAnalysis diff --git a/qiskit_experiments/curve_analysis/base_curve_analysis.py b/qiskit_experiments/curve_analysis/base_curve_analysis.py index fcc90cfbd7..aeb5cdd125 100644 --- a/qiskit_experiments/curve_analysis/base_curve_analysis.py +++ b/qiskit_experiments/curve_analysis/base_curve_analysis.py @@ -46,7 +46,7 @@ class BaseCurveAnalysis(BaseAnalysis, ABC): """Abstract superclass of curve analysis base classes. - Note that this class doesn't define :meth:`_run_analysis` method, + Note that this class doesn't define the :meth:`_run_analysis` method, and no actual fitting protocol is implemented in this base class. However, this class defines several common methods that can be reused. A curve analysis subclass can construct proper fitting protocol @@ -164,9 +164,8 @@ def _default_options(cls) -> Options: Default to ``False``. average_method (str): Method to average the y values when the same x values appear multiple times. One of "sample", "iwv" (i.e. inverse weighted variance), - "shots_weighted". See - :func:`~qiskit_experiments.curve_analysis.data_processing.mean_xy_data` - for details. Default to "shots_weighted". + "shots_weighted". See :func:`.mean_xy_data` for details. Default to + "shots_weighted". p0 (Dict[str, float]): Initial guesses for the fit parameters. The dictionary is keyed on the fit parameter names. bounds (Dict[str, Tuple[float, float]]): Boundary of fit parameters. 
diff --git a/qiskit_experiments/curve_analysis/curve_analysis.py b/qiskit_experiments/curve_analysis/curve_analysis.py index 41f46244de..383931f1ab 100644 --- a/qiskit_experiments/curve_analysis/curve_analysis.py +++ b/qiskit_experiments/curve_analysis/curve_analysis.py @@ -26,8 +26,7 @@ from .base_curve_analysis import BaseCurveAnalysis, PARAMS_ENTRY_PREFIX from .curve_data import CurveData, FitOptions, CurveFitResult -from .data_processing import multi_mean_xy_data, data_sort -from .utils import eval_with_uncertainties, convert_lmfit_result +from .utils import eval_with_uncertainties, convert_lmfit_result, multi_mean_xy_data, data_sort class CurveAnalysis(BaseCurveAnalysis): diff --git a/qiskit_experiments/curve_analysis/curve_data.py b/qiskit_experiments/curve_analysis/curve_data.py index 08e80a0919..df81793568 100644 --- a/qiskit_experiments/curve_analysis/curve_data.py +++ b/qiskit_experiments/curve_analysis/curve_data.py @@ -29,6 +29,7 @@ @dataclasses.dataclass(frozen=True) class SeriesDef: """A dataclass to describe the definition of the curve. + Attributes: fit_func: A callable that defines the fit model of this curve. 
The argument names in the callable are parsed to create the fit parameter list, which will appear diff --git a/qiskit_experiments/curve_analysis/curve_fit.py b/qiskit_experiments/curve_analysis/curve_fit.py index a8c1459232..9f2ca9ecee 100644 --- a/qiskit_experiments/curve_analysis/curve_fit.py +++ b/qiskit_experiments/curve_analysis/curve_fit.py @@ -20,7 +20,7 @@ import uncertainties import scipy.optimize as opt from qiskit_experiments.exceptions import AnalysisError -from qiskit_experiments.curve_analysis.data_processing import filter_data +from qiskit_experiments.curve_analysis.utils import filter_data from qiskit_experiments.curve_analysis.curve_data import FitData from qiskit_experiments.warnings import deprecated_function diff --git a/qiskit_experiments/curve_analysis/data_processing.py b/qiskit_experiments/curve_analysis/data_processing.py deleted file mode 100644 index e191d667c9..0000000000 --- a/qiskit_experiments/curve_analysis/data_processing.py +++ /dev/null @@ -1,293 +0,0 @@ -# This code is part of Qiskit. -# -# (C) Copyright IBM 2021. -# -# This code is licensed under the Apache License, Version 2.0. You may -# obtain a copy of this license in the LICENSE.txt file in the root directory -# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0. -# -# Any modifications or derivative works of this code must retain this -# copyright notice, and modified files need to carry a notice indicating -# that they have been altered from the originals. -""" -Data processing utility functions for curve fitting experiments -""" -# pylint: disable = invalid-name - -from typing import List, Dict, Tuple, Optional, Callable -import numpy as np -from qiskit.exceptions import QiskitError - - -def filter_data(data: List[Dict[str, any]], **filters) -> List[Dict[str, any]]: - """Return the list of filtered data - - Args: - data: list of data dicts. - filters: kwargs for filtering based on metadata - values. - - Returns: - The list of filtered data. 
If no filters are provided this will be the - input list. - """ - if not filters: - return data - filtered_data = [] - for datum in data: - include = True - metadata = datum["metadata"] - for key, val in filters.items(): - if key not in metadata or metadata[key] != val: - include = False - break - if include: - filtered_data.append(datum) - return filtered_data - - -def mean_xy_data( - xdata: np.ndarray, - ydata: np.ndarray, - sigma: Optional[np.ndarray] = None, - shots: Optional[np.ndarray] = None, - method: str = "sample", -) -> Tuple[np.ndarray, ...]: - r"""Return (x, y_mean, sigma) data. - - The mean is taken over all ydata values with the same xdata value using - the specified method. For each x the mean :math:`\overline{y}` and variance - :math:`\sigma^2` are computed as - - * ``"sample"`` (default) *Sample mean and variance* - :math:`\overline{y} = \sum_{i=1}^N y_i / N`, - :math:`\sigma^2 = \sum_{i=1}^N ((\overline{y} - y_i)^2) / N` - * ``"iwv"`` *Inverse-weighted variance* - :math:`\overline{y} = (\sum_{i=1}^N y_i / \sigma_i^2 ) \sigma^2` - :math:`\sigma^2 = 1 / (\sum_{i=1}^N 1 / \sigma_i^2)` - * ``"shots_weighted_variance"`` *Sample mean and variance with weights from shots* - :math:`\overline{y} = \sum_{i=1}^N n_i y_i / M`, - :math:`\sigma^2 = \sum_{i=1}^N (n_i \sigma_i / M)^2`, - where :math:`n_i` is the number of shots per data point and :math:`M = \sum_{i=1}^N n_i` - is a total number of shots from different circuit execution at the same x value. - If ``shots`` is not provided, this applies uniform weights to all values. - - Args - xdata: 1D or 2D array of xdata from curve_fit_data or - multi_curve_fit_data - ydata: array of ydata returned from curve_fit_data or - multi_curve_fit_data - sigma: Optional, array of standard deviations in ydata. - shots: Optional, array of shots used to get a data point. - method: The method to use for computing y means and - standard deviations sigma (default: "sample"). 
- - Returns: - tuple: ``(x, y_mean, sigma, shots)``, where - ``x`` is an arrays of unique x-values, ``y`` is an array of - sample mean y-values, ``sigma`` is an array of sample standard - deviation of y values, and ``shots`` are the total number of experiment shots - used to evaluate the data point. If ``shots`` in the function call is ``None``, - the numbers appear in the returned value will represent just a number of - duplicated x value entries. - - Raises: - QiskitError: If "ivw" method is used without providing a sigma. - """ - x_means = np.unique(xdata, axis=0) - y_means = np.zeros(x_means.size) - y_sigmas = np.zeros(x_means.size) - y_shots = np.zeros(x_means.size) - - if shots is None or any(np.isnan(shots)): - # this will become standard average - shots = np.ones_like(xdata) - - # Sample mean and variance method - if method == "sample": - for i in range(x_means.size): - # Get positions of y to average - idxs = xdata == x_means[i] - ys = ydata[idxs] - ns = shots[idxs] - - # Compute sample mean and sample standard error of the mean - y_means[i] = np.mean(ys) - y_sigmas[i] = np.sqrt(np.mean((y_means[i] - ys) ** 2) / ys.size) - y_shots[i] = np.sum(ns) - - return x_means, y_means, y_sigmas, y_shots - - # Inverse-weighted variance method - if method == "iwv": - if sigma is None: - raise QiskitError( - "The inverse-weighted variance method cannot be used with" " `sigma=None`" - ) - for i in range(x_means.size): - # Get positions of y to average - idxs = xdata == x_means[i] - ys = ydata[idxs] - ns = shots[idxs] - - # Compute the inverse-variance weighted y mean and variance - weights = 1 / sigma[idxs] ** 2 - y_var = 1 / np.sum(weights) - y_means[i] = y_var * np.sum(weights * ys) - y_sigmas[i] = np.sqrt(y_var) - y_shots[i] = np.sum(ns) - - return x_means, y_means, y_sigmas, y_shots - - # Quadrature sum of variance - if method == "shots_weighted": - for i in range(x_means.size): - # Get positions of y to average - idxs = xdata == x_means[i] - ys = ydata[idxs] - ss = 
sigma[idxs] - ns = shots[idxs] - weights = ns / np.sum(ns) - - # Compute sample mean and sum of variance with weights based on shots - y_means[i] = np.sum(weights * ys) - y_sigmas[i] = np.sqrt(np.sum(weights**2 * ss**2)) - y_shots[i] = np.sum(ns) - - return x_means, y_means, y_sigmas, y_shots - - # Invalid method - raise QiskitError(f"Unsupported method {method}") - - -def multi_mean_xy_data( - series: np.ndarray, - xdata: np.ndarray, - ydata: np.ndarray, - sigma: Optional[np.ndarray] = None, - shots: Optional[np.ndarray] = None, - method: str = "sample", -) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray]: - """Take mean of multi series data set. - - Args: - series: Series index. - xdata: 1D or 2D array of xdata from curve_fit_data or - multi_curve_fit_data - ydata: array of ydata returned from curve_fit_data or - multi_curve_fit_data - sigma: Optional, array of standard deviations in ydata. - shots: Optional, array of shots used to get a data point. - method: The method to use for computing y means and - standard deviations sigma (default: "sample"). 
- - Returns: - Tuple of (series, xdata, ydata, sigma, shots) - - See also: - :func:`~.data_processing.mean_xy_data` - """ - series_vals = np.unique(series) - - series_means = [] - xdata_means = [] - ydata_means = [] - sigma_means = [] - shots_sums = [] - - # Get x, y, sigma data for series and process mean data - for series_val in series_vals: - idxs = series == series_val - sigma_i = sigma[idxs] if sigma is not None else None - shots_i = shots[idxs] if shots is not None else None - - x_mean, y_mean, sigma_mean, shots_sum = mean_xy_data( - xdata[idxs], ydata[idxs], sigma=sigma_i, shots=shots_i, method=method - ) - series_means.append(np.full(x_mean.size, series_val, dtype=int)) - xdata_means.append(x_mean) - ydata_means.append(y_mean) - sigma_means.append(sigma_mean) - shots_sums.append(shots_sum) - - # Concatenate lists - return ( - np.concatenate(series_means), - np.concatenate(xdata_means), - np.concatenate(ydata_means), - np.concatenate(sigma_means), - np.concatenate(shots_sums), - ) - - -def data_sort( - series: np.ndarray, - xdata: np.ndarray, - ydata: np.ndarray, - sigma: Optional[np.ndarray] = None, - shots: Optional[np.ndarray] = None, -) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray]: - """Sort data. - - Input x values may not be lined up in order, since experiment may accept user input array, - or data may be concatenated with previous scan. This sometimes confuses the algorithmic - generation of initial guesses especially when guess depends on derivative. - - This returns data set that is sorted by xdata and series in ascending order. - - Args: - series: Series index. - xdata: 1D or 2D array of xdata from curve_fit_data or - multi_curve_fit_data - ydata: array of ydata returned from curve_fit_data or - multi_curve_fit_data - sigma: Optional, array of standard deviations in ydata. - shots: Optional, array of shots used to get a data point. 
- - Returns: - Tuple of (series, xdata, ydata, sigma, shots) sorted in ascending order of xdata and series. - """ - if sigma is None: - sigma = np.full(series.size, np.nan, dtype=float) - - if shots is None: - shots = np.full(series.size, np.nan, dtype=float) - - sorted_data = sorted(zip(series, xdata, ydata, sigma, shots), key=lambda d: (d[0], d[1])) - - return np.asarray(sorted_data).T - - -def level2_probability(data: Dict[str, any], outcome: str) -> Tuple[float, float]: - """Return the outcome probability mean and variance. - - Args: - data: A data dict containing count data. - outcome: bitstring for desired outcome probability. - - Returns: - tuple: (p_mean, p_var) of the probability mean and variance - estimated from the counts. - - .. note:: - - This assumes a binomial distribution where :math:`K` counts - of the desired outcome from :math:`N` shots the - mean probability is :math:`p = K / N` and the variance is - :math:`\\sigma^2 = p (1-p) / N`. - """ - counts = data["counts"] - - shots = sum(counts.values()) - p_mean = counts.get(outcome, 0.0) / shots - p_var = p_mean * (1 - p_mean) / shots - return p_mean, p_var - - -def probability(outcome: str) -> Callable: - """Return probability data processor callback used by the analysis classes.""" - - def data_processor(data): - return level2_probability(data, outcome) - - return data_processor diff --git a/qiskit_experiments/curve_analysis/utils.py b/qiskit_experiments/curve_analysis/utils.py index 9802bc08fd..b68d1a45ce 100644 --- a/qiskit_experiments/curve_analysis/utils.py +++ b/qiskit_experiments/curve_analysis/utils.py @@ -12,7 +12,7 @@ """Utils in curve analysis.""" -from typing import Union, Optional, List, Dict +from typing import Union, Optional, List, Dict, Tuple, Callable import time import asteval @@ -23,7 +23,7 @@ from uncertainties import unumpy from qiskit_experiments.curve_analysis.curve_data import CurveFitResult -from qiskit_experiments.exceptions import AnalysisError +from 
qiskit_experiments.exceptions import AnalysisError, QiskitError from qiskit_experiments.framework import AnalysisResultData @@ -220,3 +220,280 @@ def eval_with_uncertainties( wrapfunc = np.vectorize(wrap_function(model.func)) return wrapfunc(x=x, **sub_params) + + +def filter_data(data: List[Dict[str, any]], **filters) -> List[Dict[str, any]]: + """Return the list of filtered data + + Args: + data: list of data dicts. + filters: kwargs for filtering based on metadata + values. + + Returns: + The list of filtered data. If no filters are provided this will be the + input list. + """ + if not filters: + return data + filtered_data = [] + for datum in data: + include = True + metadata = datum["metadata"] + for key, val in filters.items(): + if key not in metadata or metadata[key] != val: + include = False + break + if include: + filtered_data.append(datum) + return filtered_data + + +def mean_xy_data( + xdata: np.ndarray, + ydata: np.ndarray, + sigma: Optional[np.ndarray] = None, + shots: Optional[np.ndarray] = None, + method: str = "sample", +) -> Tuple[np.ndarray, ...]: + r"""Return (x, y_mean, sigma) data. + + The mean is taken over all :math:`y` data values with the same :math:`x` data value using + the specified method. 
For each :math:`x` the mean :math:`\overline{y}` and variance
+    :math:`\sigma^2` are computed as
+
+    * ``"sample"`` (default): *Sample mean and variance*
+
+        * :math:`\overline{y} = \sum_{i=1}^N y_i / N`,
+
+        * :math:`\sigma^2 = \sum_{i=1}^N ((\overline{y} - y_i)^2) / N`
+
+    * ``"iwv"``: *Inverse-weighted variance*
+
+        * :math:`\overline{y} = (\sum_{i=1}^N y_i / \sigma_i^2 ) \sigma^2`
+        * :math:`\sigma^2 = 1 / (\sum_{i=1}^N 1 / \sigma_i^2)`
+
+    * ``"shots_weighted"``: *Sample mean and variance with weights from shots*
+
+        * :math:`\overline{y} = \sum_{i=1}^N n_i y_i / M`,
+
+        * :math:`\sigma^2 = \sum_{i=1}^N (n_i \sigma_i / M)^2`,
+          where :math:`n_i` is the number of shots per data point and :math:`M = \sum_{i=1}^N n_i`
+          is the total number of shots from different circuit executions at the same :math:`x` value.
+          If ``shots`` is not provided, this applies uniform weights to all values.
+
+    Args:
+        xdata: 1D or 2D array of xdata from curve_fit_data or
+            multi_curve_fit_data
+        ydata: array of ydata returned from curve_fit_data or
+            multi_curve_fit_data
+        sigma: Optional, array of standard deviations in ydata.
+        shots: Optional, array of shots used to get a data point.
+        method: The method to use for computing y means and
+            standard deviations sigma (default: "sample").
+
+    Returns:
+        tuple: ``(x, y_mean, sigma, shots)``, where ``x`` is an array of unique
+        x-values, ``y`` is an array of sample mean y-values, ``sigma`` is an array of
+        sample standard deviations of y values, and ``shots`` is the total number of
+        experiment shots used to evaluate the data point. If ``shots`` in the function
+        call is ``None``, the numbers appearing in the returned value represent just the
+        number of duplicated x-value entries.
+
+    Raises:
+        QiskitError: If the "iwv" method is used without providing a sigma.
+ """ + x_means = np.unique(xdata, axis=0) + y_means = np.zeros(x_means.size) + y_sigmas = np.zeros(x_means.size) + y_shots = np.zeros(x_means.size) + + if shots is None or any(np.isnan(shots)): + # this will become standard average + shots = np.ones_like(xdata) + + # Sample mean and variance method + if method == "sample": + for i in range(x_means.size): + # Get positions of y to average + idxs = xdata == x_means[i] + ys = ydata[idxs] + ns = shots[idxs] + + # Compute sample mean and sample standard error of the mean + y_means[i] = np.mean(ys) + y_sigmas[i] = np.sqrt(np.mean((y_means[i] - ys) ** 2) / ys.size) + y_shots[i] = np.sum(ns) + + return x_means, y_means, y_sigmas, y_shots + + # Inverse-weighted variance method + if method == "iwv": + if sigma is None: + raise QiskitError( + "The inverse-weighted variance method cannot be used with" " `sigma=None`" + ) + for i in range(x_means.size): + # Get positions of y to average + idxs = xdata == x_means[i] + ys = ydata[idxs] + ns = shots[idxs] + + # Compute the inverse-variance weighted y mean and variance + weights = 1 / sigma[idxs] ** 2 + y_var = 1 / np.sum(weights) + y_means[i] = y_var * np.sum(weights * ys) + y_sigmas[i] = np.sqrt(y_var) + y_shots[i] = np.sum(ns) + + return x_means, y_means, y_sigmas, y_shots + + # Quadrature sum of variance + if method == "shots_weighted": + for i in range(x_means.size): + # Get positions of y to average + idxs = xdata == x_means[i] + ys = ydata[idxs] + ss = sigma[idxs] + ns = shots[idxs] + weights = ns / np.sum(ns) + + # Compute sample mean and sum of variance with weights based on shots + y_means[i] = np.sum(weights * ys) + y_sigmas[i] = np.sqrt(np.sum(weights**2 * ss**2)) + y_shots[i] = np.sum(ns) + + return x_means, y_means, y_sigmas, y_shots + + # Invalid method + raise QiskitError(f"Unsupported method {method}") + + +def multi_mean_xy_data( + series: np.ndarray, + xdata: np.ndarray, + ydata: np.ndarray, + sigma: Optional[np.ndarray] = None, + shots: Optional[np.ndarray] = 
None, + method: str = "sample", +) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray]: + """Take mean of multi series data set. See :func:`.mean_xy_data`. + + Args: + series: Series index. + xdata: 1D or 2D array of xdata from curve_fit_data or + multi_curve_fit_data + ydata: array of ydata returned from curve_fit_data or + multi_curve_fit_data + sigma: Optional, array of standard deviations in ydata. + shots: Optional, array of shots used to get a data point. + method: The method to use for computing y means and + standard deviations sigma (default: "sample"). + + Returns: + Tuple of ``(series, xdata, ydata, sigma, shots)``. + + """ + series_vals = np.unique(series) + + series_means = [] + xdata_means = [] + ydata_means = [] + sigma_means = [] + shots_sums = [] + + # Get x, y, sigma data for series and process mean data + for series_val in series_vals: + idxs = series == series_val + sigma_i = sigma[idxs] if sigma is not None else None + shots_i = shots[idxs] if shots is not None else None + + x_mean, y_mean, sigma_mean, shots_sum = mean_xy_data( + xdata[idxs], ydata[idxs], sigma=sigma_i, shots=shots_i, method=method + ) + series_means.append(np.full(x_mean.size, series_val, dtype=int)) + xdata_means.append(x_mean) + ydata_means.append(y_mean) + sigma_means.append(sigma_mean) + shots_sums.append(shots_sum) + + # Concatenate lists + return ( + np.concatenate(series_means), + np.concatenate(xdata_means), + np.concatenate(ydata_means), + np.concatenate(sigma_means), + np.concatenate(shots_sums), + ) + + +def data_sort( + series: np.ndarray, + xdata: np.ndarray, + ydata: np.ndarray, + sigma: Optional[np.ndarray] = None, + shots: Optional[np.ndarray] = None, +) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, np.ndarray]: + """Sort data. + + Input x values may not be lined up in order, since experiment may accept user input array, + or data may be concatenated with previous scan. 
This sometimes confuses the algorithmic
+    generation of initial guesses, especially when a guess depends on a derivative.
+
+    This returns a data set that is sorted by series and xdata in ascending order.
+
+    Args:
+        series: Series index.
+        xdata: 1D or 2D array of xdata.
+        ydata: Array of ydata.
+        sigma: Optional, array of standard deviations in ydata.
+        shots: Optional, array of shots used to get a data point.
+
+    Returns:
+        Tuple of (series, xdata, ydata, sigma, shots) sorted in ascending order of series
+        and xdata.
+    """
+    if sigma is None:
+        sigma = np.full(series.size, np.nan, dtype=float)
+
+    if shots is None:
+        shots = np.full(series.size, np.nan, dtype=float)
+
+    sorted_data = sorted(zip(series, xdata, ydata, sigma, shots), key=lambda d: (d[0], d[1]))
+
+    return np.asarray(sorted_data).T
+
+
+def level2_probability(data: Dict[str, any], outcome: str) -> Tuple[float, float]:
+    """Return the outcome probability mean and variance.
+
+    Args:
+        data: A data dict containing count data.
+        outcome: Bitstring for the desired outcome probability.
+
+    Returns:
+        tuple: (p_mean, p_var) of the probability mean and variance
+        estimated from the counts.
+
+    .. note::
+
+        This assumes a binomial distribution where, given :math:`K`
+        counts of the desired outcome out of :math:`N` shots, the
+        mean probability is :math:`p = K / N` and the variance is
+        :math:`\\sigma^2 = p (1-p) / N`.
+ """ + counts = data["counts"] + + shots = sum(counts.values()) + p_mean = counts.get(outcome, 0.0) / shots + p_var = p_mean * (1 - p_mean) / shots + return p_mean, p_var + + +def probability(outcome: str) -> Callable: + """Return probability data processor callback used by the analysis classes.""" + + def data_processor(data): + return level2_probability(data, outcome) + + return data_processor diff --git a/qiskit_experiments/data_processing/__init__.py b/qiskit_experiments/data_processing/__init__.py index 40b7e99083..56a05ddfa3 100644 --- a/qiskit_experiments/data_processing/__init__.py +++ b/qiskit_experiments/data_processing/__init__.py @@ -68,6 +68,7 @@ AverageData BasisExpectationValue MinMaxNormalize + ShotOrder RestlessNode RestlessToCounts RestlessToIQ @@ -78,6 +79,7 @@ .. autosummary:: :toctree: ../stubs/ + BaseDiscriminator SkLDA SkQDA """ @@ -94,10 +96,12 @@ AverageData, BasisExpectationValue, MinMaxNormalize, + ShotOrder, RestlessNode, RestlessToCounts, RestlessToIQ, ) from .data_processor import DataProcessor +from .discriminator import BaseDiscriminator from .sklearn_discriminators import SkLDA, SkQDA diff --git a/qiskit_experiments/data_processing/discriminator.py b/qiskit_experiments/data_processing/discriminator.py index e988d3ed35..c8caf2384f 100644 --- a/qiskit_experiments/data_processing/discriminator.py +++ b/qiskit_experiments/data_processing/discriminator.py @@ -17,14 +17,14 @@ class BaseDiscriminator: - """An abstract base class for serializable discriminators. - - ``BaseDiscriminator``s are used in the :class:`.Discriminator` data action nodes. + """An abstract base class for serializable discriminators used in the + :class:`.DiscriminatorNode` data action nodes. This class allows developers to implement their own discriminators or wrap discriminators from external libraries which therefore ensures that the discriminator fits in the data processing chain. This class defines an interface for discriminator objects. 
Subclasses must implement the following methods: + - :meth:`predict`: called in the :class:`.Discriminator` data-action class to predict labels from the input level-one data. - :meth:`config`: produces the config file to serialize and deserialize the discriminator. diff --git a/qiskit_experiments/data_processing/nodes.py b/qiskit_experiments/data_processing/nodes.py index 02ac0b9d66..bb31a42dfc 100644 --- a/qiskit_experiments/data_processing/nodes.py +++ b/qiskit_experiments/data_processing/nodes.py @@ -434,11 +434,12 @@ def _process(self, data: np.array) -> np.array: class DiscriminatorNode(DataAction): """A class to discriminate kerneled data, e.g., IQ data, to produce counts. - This node integrates into the data processing chain a serializable :class:`.BaseDiscriminator` - subclass instance which must have a :meth:`predict` method that takes as input a list of lists - and returns a list of labels. Crucially, this node can be initialized with a single - discriminator which applies to each memory slot or it can be initialized with a list of - discriminators, i.e., one for each slot. + This node integrates into the data processing chain a serializable + :class:`.BaseDiscriminator` subclass instance which must have a + :meth:`~.BaseDiscriminator.predict` method that takes as input a list of lists and + returns a list of labels. Crucially, this node can be initialized with a single + discriminator which applies to each memory slot or it can be initialized with a list + of discriminators, i.e., one for each slot. .. note:: @@ -551,7 +552,7 @@ def _process(self, data: np.ndarray) -> np.ndarray: class MemoryToCounts(DataAction): """A data action that takes discriminated data and transforms it into a counts dict. - This node is intended to be used after the :class:`.Discriminator` node. It will convert + This node is intended to be used after the :class:`.DiscriminatorNode` node. 
It will convert
     the classified memory into a list of count dictionaries wrapped in a numpy array.
     """

@@ -851,11 +852,13 @@ class ShotOrder(Enum):

     Generally, there are two possible modes in which a backend measures m
     circuits with n shots:

-    - In the "circuit_first" mode, the backend subsequently first measures
-      all m circuits and then repeats this n times.
-    - In the "shot_first" mode, the backend first measures the 1st circuit
-      n times, then the 2nd circuit n times, and it proceeds with the remaining
-      circuits in the same way until it measures the m-th circuit n times.
+
+    - In the "circuit_first" mode, the backend first measures all m circuits
+      in sequence and then repeats this sequence n times.
+
+    - In the "shot_first" mode, the backend first measures the 1st circuit
+      n times, then the 2nd circuit n times, and it proceeds with the remaining
+      circuits in the same way until it measures the m-th circuit n times.

     The current default mode of IBM Quantum devices is "circuit_first".
     """
@@ -871,7 +874,7 @@ class RestlessNode(DataAction, ABC):

     In restless measurements, the qubit is not reset after each measurement. Instead, the
     outcome of the previous quantum non-demolition measurement is the initial state for the
     current circuit. Restless measurements therefore require special data processing nodes
-    that are implemented as sub-classes of `RestlessNode`. Restless experiments provide a
+    that are implemented as sub-classes of ``RestlessNode``. Restless experiments provide a
     fast alternative for several calibration and characterization tasks, for details see
     https://arxiv.org/pdf/2202.06981.pdf.
diff --git a/qiskit_experiments/database_service/__init__.py b/qiskit_experiments/database_service/__init__.py
index 1d454cf97e..24d19f2840 100644
--- a/qiskit_experiments/database_service/__init__.py
+++ b/qiskit_experiments/database_service/__init__.py
@@ -18,7 +18,7 @@

..
currentmodule:: qiskit_experiments.database_service This subpackage provides database-specific utility functions and exceptions which -are used with the :class:`ExperimentData` and :class:`AnalysisResult` classes. +are used with the :class:`.ExperimentData` and :class:`.AnalysisResult` classes. Exceptions diff --git a/qiskit_experiments/framework/__init__.py b/qiskit_experiments/framework/__init__.py index e25b1db496..05f047b9a9 100644 --- a/qiskit_experiments/framework/__init__.py +++ b/qiskit_experiments/framework/__init__.py @@ -57,7 +57,7 @@ The experiment class contains information for generating circuits and analysis of results. These can typically be configured with a variety of options. -Once all options are set, you can call :meth:`BaseExperiment.run` method to run +Once all options are set, you can call the :meth:`.BaseExperiment.run` method to run the experiment on a Qiskit compatible ``backend``. The steps of running an experiment involves generation experimental circuits @@ -68,9 +68,9 @@ The result of running an experiment is an :class:`ExperimentData` container which contains the analysis results, any figures generated during analysis, and the raw measurement data. These can each be accessed using the -:meth:`ExperimentData.analysis_results`, :meth:`ExperimentData.figure` -and :meth:`ExperimentData.data` methods respectively. Additional metadata -for the experiment itself can be added via :meth:`ExperimentData.metadata`. +:meth:`.ExperimentData.analysis_results`, :meth:`.ExperimentData.figure` +and :meth:`.ExperimentData.data` methods respectively. Additional metadata +for the experiment itself can be added via :meth:`.ExperimentData.metadata`. 
Classes
diff --git a/qiskit_experiments/framework/analysis_result.py b/qiskit_experiments/framework/analysis_result.py
index 7bae2fb937..1b532f3234 100644
--- a/qiskit_experiments/framework/analysis_result.py
+++ b/qiskit_experiments/framework/analysis_result.py
@@ -41,33 +41,33 @@ class AnalysisResult:
     """Class representing an analysis result for an experiment.

-    Analysis results can also be stored in a database.
+    Analysis results can also be stored using the experiments service.

-    The field `db_data` is a dataclass (`ExperimentDataclass`) containing
-    all the data that can be stored in the database and loaded from it, and
+    The field ``db_data`` is a dataclass (``ExperimentDataclass``) containing
+    all the data that can be stored with the service and loaded from it, and
     as such is subject to strict conventions.

     Other data fields can be added and used freely, but they won't be saved
     to the database.

-    Note that the `result_data` field of the dataclass is by itself a dictioary
+    Note that the ``result_data`` field of the dataclass is by itself a dictionary
     capable of holding arbitrary values (in a dictionary indexed by a string).

-    The data fields in the `db_data` dataclass are:
+    The data fields in the ``db_data`` dataclass are:

-    * `experiment_id`: `str`
-    * `result_id`: `str`
-    * `result_type`: `str`
-    * `device_components`: `list` of `str`
-    * `quality`: `str`
-    * `verified`: `bool`
-    * `tags`: `list` of `str`
-    * `backend_name`: `str`
-    * `chisq`: `float`
-    * `result_data`: `dict` with `str` keys and unrestricted values
+    * ``experiment_id``: ``str``
+    * ``result_id``: ``str``
+    * ``result_type``: ``str``
+    * ``device_components``: ``List[str]``
+    * ``quality``: ``str``
+    * ``verified``: ``bool``
+    * ``tags``: ``List[str]``
+    * ``backend_name``: ``str``
+    * ``chisq``: ``float``
+    * ``result_data``: ``Dict[str, Any]``

     Analysis data that does not fit into the other fields should be added to
-    the `result_data` dict, e.g.
curve parameters in experiments doing a curve fit. + the ``result_data`` dict, e.g. curve parameters in experiments doing a curve fit. """ version = 1 @@ -114,10 +114,10 @@ def __init__( verified: Whether the result quality has been verified. tags: Tags for this analysis result. service: Experiment service to be used to store result in database. - source: Class and qiskit version information when loading from an + source: Class and Qiskit version information when loading from an experiment service. Returns: - The Analysis result object + The AnalysisResult object. """ # Data to be stored in DB. self._db_data = AnalysisResultData( diff --git a/qiskit_experiments/framework/base_experiment.py b/qiskit_experiments/framework/base_experiment.py index f261ff0066..01fe623249 100644 --- a/qiskit_experiments/framework/base_experiment.py +++ b/qiskit_experiments/framework/base_experiment.py @@ -298,12 +298,12 @@ def circuits(self) -> List[QuantumCircuit]: """Return a list of experiment circuits. Returns: - A list of :class:`QuantumCircuit`. + A list of :class:`~qiskit.circuit.QuantumCircuit`. .. note:: These circuits should be on qubits ``[0, .., N-1]`` for an *N*-qubit experiment. The circuits mapped to physical qubits - are obtained via the :meth:`transpiled_circuits` method. + are obtained via the internal :meth:`_transpiled_circuits` method. """ # NOTE: Subclasses should override this method using the `options` # values for any explicit experiment options that affect circuit diff --git a/qiskit_experiments/framework/composite/composite_analysis.py b/qiskit_experiments/framework/composite/composite_analysis.py index 2860449b11..e07e606712 100644 --- a/qiskit_experiments/framework/composite/composite_analysis.py +++ b/qiskit_experiments/framework/composite/composite_analysis.py @@ -40,15 +40,15 @@ class CompositeAnalysis(BaseAnalysis): .. 
note::

-        If the composite :class:`ExperimentData` does not already contain
-        child experiment data containers for the component experiments
-        they will be initialized and added to the experiment data when :meth:`run`
-        is called on the composite data.
-
-        When calling :meth:`run` on experiment data already containing
-        initialized component experiment data, any previously stored
-        circuit data will be cleared and replaced with the marginalized data
-        from the composite experiment data.
+        If the composite :class:`ExperimentData` does not already contain child
+        experiment data containers for the component experiments, they will be
+        initialized and added to the experiment data when
+        :meth:`~.CompositeAnalysis.run` is called on the composite data.
+
+        When calling :meth:`~.CompositeAnalysis.run` on experiment data already
+        containing initialized component experiment data, any previously stored circuit
+        data will be cleared and replaced with the marginalized data from the composite
+        experiment data.
     """

     def __init__(self, analyses: List[BaseAnalysis], flatten_results: bool = False):
diff --git a/qiskit_experiments/framework/experiment_data.py b/qiskit_experiments/framework/experiment_data.py
index 1ecc4a471d..fc42b1945b 100644
--- a/qiskit_experiments/framework/experiment_data.py
+++ b/qiskit_experiments/framework/experiment_data.py
@@ -148,7 +148,7 @@ class ExperimentData:
     This class handles the following:

     1. Storing the data related to an experiment: raw data, metadata, analysis results,
-    and figures
+       and figures
     2. Managing jobs and adding data from jobs automatically
     3. Saving and loading data from the database service

@@ -186,17 +186,17 @@ def __init__(
         """Initialize experiment data.

         Args:
-            experiment: Optional, experiment object that generated the data.
-            backend: Optional, Backend the experiment runs on; overrides the
-            backend in the experiment object
+            experiment: Experiment object that generated the data.
+            backend: Backend the experiment runs on. This overrides the
This overrides the + backend in the experiment object. service: The service that stores the experiment results to the database - parent_id: Optional, ID of the parent experiment data + parent_id: ID of the parent experiment data in the setting of a composite experiment - job_ids: Optional, IDs of jobs submitted for the experiment. - child_data: Optional, list of child experiment data. - verbose: Optional, whether to print messages - db_data: Optional, a prepared ExperimentDataclass of the experiment info; - overrides other db parameters. + job_ids: IDs of jobs submitted for the experiment. + child_data: List of child experiment data. + verbose: Whether to print messages. + db_data: A prepared ExperimentDataclass of the experiment info. + This overrides other db parameters. """ if experiment is not None: backend = backend or experiment.backend @@ -648,7 +648,8 @@ def add_data( """Add experiment data. Args: - data: Experiment data to add. Several types are accepted for convenience + data: Experiment data to add. Several types are accepted for convenience: + * Result: Add data from this ``Result`` object. * List[Result]: Add data from the ``Result`` objects. * Dict: Add this data. @@ -1709,21 +1710,20 @@ def status(self) -> ExperimentStatus: def job_status(self) -> JobStatus: """Return the experiment job execution status. - Possible return values for :class:`.JobStatus` are + Possible return values for :class:`qiskit.providers.jobstatus.JobStatus` are - * :attr:`~.JobStatus.ERROR` - if any job incurred an error - * :attr:`~.JobStatus.CANCELLED` - if any job is cancelled. - * :attr:`~.JobStatus.RUNNING` - if any job is still running. - * :attr:`~.JobStatus.QUEUED` - if any job is queued. - * :attr:`~.JobStatus.VALIDATING` - if any job is being validated. - * :attr:`~.JobStatus.INITIALIZING` - if any job is being initialized. - * :attr:`~.JobStatus.DONE` - if all jobs are finished. + * ``ERROR`` - if any job incurred an error + * ``CANCELLED`` - if any job is cancelled. 
+ * ``RUNNING`` - if any job is still running. + * ``QUEUED`` - if any job is queued. + * ``VALIDATING`` - if any job is being validated. + * ``INITIALIZING`` - if any job is being initialized. + * ``DONE`` - if all jobs are finished. .. note:: - If an experiment has status :attr:`~.JobStatus.ERROR` or - :attr:`~.JobStatus.CANCELLED` there may still be pending or - running jobs. In these cases it may be beneficial to call + If an experiment has status ``ERROR`` or ``CANCELLED`` there may still be + pending or running jobs. In these cases it may be beneficial to call :meth:`cancel_jobs` to terminate these remaining jobs. Returns: diff --git a/qiskit_experiments/library/__init__.py b/qiskit_experiments/library/__init__.py index 5720238b85..4a4126f82b 100644 --- a/qiskit_experiments/library/__init__.py +++ b/qiskit_experiments/library/__init__.py @@ -36,6 +36,7 @@ ~randomized_benchmarking.StandardRB ~randomized_benchmarking.InterleavedRB + ~tomography.TomographyExperiment ~tomography.StateTomography ~tomography.ProcessTomography ~tomography.MitigatedStateTomography @@ -64,7 +65,6 @@ ~characterization.FineAmplitude ~characterization.FineXAmplitude ~characterization.FineSXAmplitude - ~characterization.FineZXAmplitude ~characterization.Rabi ~characterization.EFRabi ~characterization.RamseyXY @@ -91,6 +91,7 @@ ~characterization.CrossResonanceHamiltonian ~characterization.EchoedCrossResonanceHamiltonian ~characterization.ZZRamsey + ~characterization.FineZXAmplitude .. _characterization-mitigation: @@ -184,6 +185,7 @@ class instance to manage parameters and pulse schedules. 
) from .randomized_benchmarking import StandardRB, InterleavedRB from .tomography import ( + TomographyExperiment, StateTomography, ProcessTomography, MitigatedStateTomography, diff --git a/qiskit_experiments/library/characterization/multi_state_discrimination.py b/qiskit_experiments/library/characterization/multi_state_discrimination.py index 9950e92914..f4e2689f7b 100644 --- a/qiskit_experiments/library/characterization/multi_state_discrimination.py +++ b/qiskit_experiments/library/characterization/multi_state_discrimination.py @@ -55,8 +55,8 @@ class MultiStateDiscrimination(BaseExperiment): :class:`MultiStateDiscriminationAnalysis` # section: reference - `Qiskit Textbook `_. + `Qiskit Textbook\ + `_ """ diff --git a/qiskit_experiments/library/randomized_benchmarking/interleaved_rb_analysis.py b/qiskit_experiments/library/randomized_benchmarking/interleaved_rb_analysis.py index bf7d29592a..266ea0703b 100644 --- a/qiskit_experiments/library/randomized_benchmarking/interleaved_rb_analysis.py +++ b/qiskit_experiments/library/randomized_benchmarking/interleaved_rb_analysis.py @@ -77,12 +77,12 @@ class InterleavedRBAnalysis(curve.CurveAnalysis): bounds: [0, 1] defpar \alpha: desc: Depolarizing parameter. - init_guess: Determined by :func:`~rb_decay` with standard RB curve. + init_guess: Determined by :meth:`.rb_decay` with standard RB curve. bounds: [0, 1] defpar \alpha_c: - desc: Ratio of the depolarizing parameter of interleaved RB to standard RB curve. + desc: Ratio of the depolarizing parameter of interleaved RB to standard RB. init_guess: Determined by alpha of interleaved RB curve divided by one of - standard RB curve. Both alpha values are estimated by :func:`~rb_decay`. + standard RB curve. Both alpha values are estimated by :meth:`.rb_decay`. 
bounds: [0, 1] # section: reference diff --git a/qiskit_experiments/library/randomized_benchmarking/rb_analysis.py b/qiskit_experiments/library/randomized_benchmarking/rb_analysis.py index f3a69c8fe7..b1a33fbe50 100644 --- a/qiskit_experiments/library/randomized_benchmarking/rb_analysis.py +++ b/qiskit_experiments/library/randomized_benchmarking/rb_analysis.py @@ -60,7 +60,7 @@ class RBAnalysis(curve.CurveAnalysis): bounds: [0, 1] defpar \alpha: desc: Depolarizing parameter. - init_guess: Determined by :func:`~rb_decay`. + init_guess: Determined by :func:`~.guess.rb_decay`. bounds: [0, 1] # section: reference diff --git a/qiskit_experiments/library/tomography/__init__.py b/qiskit_experiments/library/tomography/__init__.py index c9ebeb058b..b068970892 100644 --- a/qiskit_experiments/library/tomography/__init__.py +++ b/qiskit_experiments/library/tomography/__init__.py @@ -24,6 +24,7 @@ :toctree: ../stubs/ :template: autosummary/experiment.rst + TomographyExperiment StateTomography ProcessTomography MitigatedStateTomography @@ -37,6 +38,7 @@ :toctree: ../stubs/ :template: autosummary/analysis.rst + TomographyAnalysis StateTomographyAnalysis ProcessTomographyAnalysis MitigatedTomographyAnalysis @@ -90,10 +92,12 @@ """ # Experiment Classes +from .tomography_experiment import TomographyExperiment from .qst_experiment import StateTomography, StateTomographyAnalysis from .qpt_experiment import ProcessTomography, ProcessTomographyAnalysis from .mit_qst_experiment import MitigatedStateTomography from .mit_qpt_experiment import MitigatedProcessTomography +from .tomography_analysis import TomographyAnalysis from .mit_tomography_analysis import MitigatedTomographyAnalysis # Basis Classes diff --git a/qiskit_experiments/library/tomography/basis/base_basis.py b/qiskit_experiments/library/tomography/basis/base_basis.py index 0717d3db7d..c116b713b2 100644 --- a/qiskit_experiments/library/tomography/basis/base_basis.py +++ b/qiskit_experiments/library/tomography/basis/base_basis.py 
@@ -83,7 +83,7 @@ class PreparationBasis(BaseBasis): define a preparation basis: * The :meth:`circuit` method which returns the logical preparation - :class:`.QuantumCircuit` for basis element index on the specified + :class:`~qiskit.circuit.QuantumCircuit` for basis element index on the specified qubits. This circuit should be a logical circuit on the specified number of qubits and will be remapped to the corresponding physical qubits during transpilation. @@ -94,7 +94,7 @@ class PreparationBasis(BaseBasis): * The :meth:`index_shape` method which returns the shape of allowed basis indices for the specified qubits, and their values. - * The :meth:`matrix_shape` method which returns the shape of subsystem + * The :meth:`~.PreparationBasis.matrix_shape` method which returns the shape of subsystem dimensions of the density matrix state on the specified qubits. """ @@ -135,22 +135,22 @@ class MeasurementBasis(BaseBasis): define a preparation basis: * The :meth:`circuit` method which returns the logical measurement - :class:`.QuantumCircuit` for basis element index on the specified + :class:`~qiskit.circuit.QuantumCircuit` for basis element index on the specified physical qubits. This circuit should be a logical circuit on the specified number of qubits and will be remapped to the corresponding physical qubits during transpilation. It should include classical bits and the measure instructions for the basis measurement storing the outcome value in these bits. - * The :meth:`matrix` method which returns the POVM element corresponding - to the basis element index and measurement outcome on the specified - qubits. This should return either a :class:`.Statevector` for a PVM - element, or :class:`.DensityMatrix` for a general POVM element. + * The :meth:`matrix` method which returns the POVM element corresponding to the + basis element index and measurement outcome on the specified qubits. 
This should + return either a :class:`~qiskit.quantum_info.Statevector` for a PVM element, or + :class:`~qiskit.quantum_info.DensityMatrix` for a general POVM element. * The :meth:`index_shape` method which returns the shape of allowed basis indices for the specified qubits, and their values. - * The :meth:`matrix_shape` method which returns the shape of subsystem + * The :meth:`~.PreparationBasis.matrix_shape` method which returns the shape of subsystem dimensions of the POVM element matrices on the specified qubits. * The :meth:`outcome_shape` method which returns the shape of allowed diff --git a/qiskit_experiments/test/__init__.py b/qiskit_experiments/test/__init__.py index 651bd96e45..e63b8517f5 100644 --- a/qiskit_experiments/test/__init__.py +++ b/qiskit_experiments/test/__init__.py @@ -18,15 +18,8 @@ .. currentmodule:: qiskit_experiments.test This module contains classes and functions that are used to enable testing -of Qiskit Experiments. It's primarily composed of fake and mock backends that -act like a normal :class:`~qiskit.providers.BackendV1` for a real device but -instead call a simulator internally. - -.. autosummary:: - :toctree: ../stubs/ - - FakeJob - FakeService +of Qiskit Experiments. It's primarily composed of mock backends that +simulate real backends. .. _backends: @@ -42,11 +35,26 @@ MockIQParallelBackend T2HahnBackend NoisyDelayAerBackend + PulseBackend + SingleTransmonTestBackend + +Helpers +======= + +Helper classes for supporting test functionality. + +.. 
autosummary:: + :toctree: ../stubs/ + + MockIQExperimentHelper + MockIQParallelExperimentHelper """ from .utils import FakeJob from .mock_iq_backend import MockIQBackend, MockIQParallelBackend +from .mock_iq_helpers import MockIQExperimentHelper, MockIQParallelExperimentHelper from .noisy_delay_aer_simulator import NoisyDelayAerBackend from .t2hahn_backend import T2HahnBackend from .fake_service import FakeService +from .pulse_backend import PulseBackend, SingleTransmonTestBackend diff --git a/qiskit_experiments/test/mock_iq_helpers.py b/qiskit_experiments/test/mock_iq_helpers.py index 8b626aaf50..b1f83a76c1 100644 --- a/qiskit_experiments/test/mock_iq_helpers.py +++ b/qiskit_experiments/test/mock_iq_helpers.py @@ -26,7 +26,7 @@ class MockIQExperimentHelper: - """Abstract class for the MockIQ helper classes + """Abstract class for the MockIQ helper classes. Different tests will use experiment specific helper classes which define the pattern of the IQ data that is then analyzed. @@ -39,13 +39,14 @@ def __init__( ): """Create a MockIQBackend helper object to define how the backend functions. - `iq_cluster_centers` and `iq_cluster_width` define the base IQ cluster centers and - standard-deviations for each qubit in a :class:`MockIQBackend` instance. These are used by - :meth:`iq_clusters` by default. Subclasses can override :meth:`iq_clusters` to return a - modified version of attr:`iq_cluster_centers` and attr:`iq_cluster_width`. - `iq_cluster_centers` is a list of tuples. For a given qubit `i_qbt` and computational state - `i_state` (either `0` or `1`), the centers of the IQ clusters are found by indexing - `iq_cluster_centers` as follows: + :attr:`iq_cluster_centers` and :attr:`iq_cluster_width` define the base IQ + cluster centers and standard deviations for each qubit in a + :class:`MockIQBackend` instance. These are used by :meth:`iq_clusters` by + default. 
Subclasses can override :meth:`iq_clusters` to return a modified + version of :attr:`iq_cluster_centers` and :attr:`iq_cluster_width`. + `iq_cluster_centers` is a list of tuples. For a given qubit ``i_qbt`` and + computational state ``i_state`` (either `0` or `1`), the centers of the IQ + clusters are found by indexing ``iq_cluster_centers`` as follows: .. code-block:: python @@ -53,8 +54,8 @@ def __init__( center_inphase = iq_center[0] center_quadrature = iq_center[1] - `iq_cluster_width` is indexed similarly except that there is only one width per qubit: i.e., the - standard-deviation of the IQ cluster for qubit `i_qbt` is + :attr:`iq_cluster_width` is indexed similarly except that there is only one width + per qubit: i.e., the standard deviation of the IQ cluster for qubit ``i_qbt`` is .. code-block:: python @@ -109,61 +110,68 @@ def compute_probabilities(self, circuits: List[QuantumCircuit]) -> List[Dict[str Examples: - **1 qubit circuit - excited state** + **1 qubit circuit - excited state** - In this experiment, we want to bring a qubit to its excited state and measure it. - The circuit: - ┌───┐┌─┐ - q: ┤ X ├┤M├ - └───┘└╥┘ - c: 1/══════╩═ - 0 + In this experiment, we want to bring a qubit to its excited state and measure it. + The circuit: - The function that calculates the probability for this circuit, doesn't need any - calculation_parameters. It will be as following: + .. parsed-literal:: - .. code-block:: + ┌───┐┌─┐ + q: ┤ X ├┤M├ + └───┘└╥┘ + c: 1/══════╩═ + 0 - @staticmethod - def compute_probabilities(self, circuits: List[QuantumCircuit]) - -> List[Dict[str, float]]: - - output_dict_list = [] - for circuit in circuits: - probability_output_dict = {"1": 1.0, "0": 0.0} - output_dict_list.append(probability_output_dict) - return output_dict_list - - **3 qubit circuit** - In this experiment, we prepare a Bell state with the first and second qubit. - In addition, we will bring the third qubit to its excited state. 
- The circuit: - ┌───┐ ┌─┐ - q_0: ┤ H ├──■──┤M├─── - └───┘┌─┴─┐└╥┘┌─┐ - q_1: ─────┤ X ├─╫─┤M├ - ┌───┐└┬─┬┘ ║ └╥┘ - q_2: ┤ X ├─┤M├──╫──╫─ - └───┘ └╥┘ ║ ║ - c: 3/═══════╩═══╩══╩═ - 2 0 1 - - When an output string isn't in the probability dictionary, the backend will presume its - probability is 0. + The function that calculates the probability for this circuit doesn't need any + calculation parameters: - .. code-block:: + .. code-block:: + + @staticmethod + def compute_probabilities(self, circuits: List[QuantumCircuit]) + -> List[Dict[str, float]]: + + output_dict_list = [] + for circuit in circuits: + probability_output_dict = {"1": 1.0, "0": 0.0} + output_dict_list.append(probability_output_dict) + return output_dict_list + + **3 qubit circuit** + + In this experiment, we prepare a Bell state with the first and second qubit. + In addition, we will bring the third qubit to its excited state. + The circuit: + + .. parsed-literal:: + + ┌───┐ ┌─┐ + q_0: ┤ H ├──■──┤M├─── + └───┘┌─┴─┐└╥┘┌─┐ + q_1: ─────┤ X ├─╫─┤M├ + ┌───┐└┬─┬┘ ║ └╥┘ + q_2: ┤ X ├─┤M├──╫──╫─ + └───┘ └╥┘ ║ ║ + c: 3/═══════╩═══╩══╩═ + 2 0 1 + + When an output string isn't in the probability dictionary, the backend will + assume its probability is 0. + + .. 
code-block:: + + @staticmethod + def compute_probabilities(self, circuits: List[QuantumCircuit]) + -> List[Dict[str, float]]: - @staticmethod - def compute_probabilities(self, circuits: List[QuantumCircuit]) - -> List[Dict[str, float]]: - - output_dict_list = [] - for circuit in circuits: - probability_output_dict = {} - probability_output_dict["001"] = 0.5 - probability_output_dict["111"] = 0.5 - output_dict_list.append(probability_output_dict) - return output_dict_list + output_dict_list = [] + for circuit in circuits: + probability_output_dict = {} + probability_output_dict["001"] = 0.5 + probability_output_dict["111"] = 0.5 + output_dict_list.append(probability_output_dict) + return output_dict_list """ # pylint: disable=unused-argument diff --git a/qiskit_experiments/visualization/__init__.py b/qiskit_experiments/visualization/__init__.py index 682f77bcb1..b1f91d5d32 100644 --- a/qiskit_experiments/visualization/__init__.py +++ b/qiskit_experiments/visualization/__init__.py @@ -20,7 +20,7 @@ experiment and analysis results. This includes plotter and drawer classes to plot data in :class:`.CurveAnalysis` and its subclasses. Plotters inherit from :class:`BasePlotter` and define a type of figure that may be generated from experiment -or analysis data. For example, the results from :class:`CurveAnalysis`---or any other +or analysis data. For example, the results from :class:`.CurveAnalysis`---or any other experiment where results are plotted against a single parameter (i.e., :math:`x`)---can be plotted using the :class:`CurvePlotter` class, which plots X-Y-like values. 
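An aside on the probability-dictionary convention illustrated in the ``compute_probabilities`` docstring examples reworked above: each circuit maps to one dictionary of outcome probabilities, and any bitstring missing from the dictionary is treated as probability 0. A minimal plain-Python sketch of that convention (the circuit labels and the ``ideal`` lookup table here are hypothetical stand-ins, not part of this diff):

```python
from typing import Dict, List


def compute_probabilities(circuits: List[str]) -> List[Dict[str, float]]:
    """Toy stand-in mirroring the MockIQExperimentHelper pattern."""
    # Hypothetical lookup: each "circuit" label maps to its ideal outcome
    # probabilities, one dictionary per circuit.
    ideal = {
        "x_gate": {"1": 1.0, "0": 0.0},            # 1-qubit excited state
        "bell_plus_x": {"001": 0.5, "111": 0.5},   # Bell pair + excited third qubit
    }
    return [ideal[circuit] for circuit in circuits]


probs = compute_probabilities(["x_gate", "bell_plus_x"])
# Bitstrings absent from a dictionary are assumed to have probability 0.
assert probs[0]["1"] == 1.0
assert probs[1].get("000", 0.0) == 0.0
```

A real helper subclass would receive ``QuantumCircuit`` objects rather than string labels; the return shape is what matters here.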
diff --git a/qiskit_experiments/visualization/plotters/iq_plotter.py b/qiskit_experiments/visualization/plotters/iq_plotter.py index 4917dc5e51..a447b48ee7 100644 --- a/qiskit_experiments/visualization/plotters/iq_plotter.py +++ b/qiskit_experiments/visualization/plotters/iq_plotter.py @@ -31,7 +31,7 @@ class IQPlotter(BasePlotter): (subclass of :class:`.BaseDiscriminator`), which is used to classify IQ results into labels. The discriminator labels are matched with the series names to generate an image of the predictions. Points that are misclassified by the discriminator are - flagged in the figure (see ``flag_misclassified`` :attr:`option`). A canonical + flagged in the figure (see the ``flag_misclassified`` option). A canonical application of :class:`.IQPlotter` is for classification of single-qubit readout for different prepared states. @@ -196,7 +196,7 @@ def _default_options(cls) -> Options: Options: plot_discriminator (bool): Whether to plot an image showing the predictions - of the ``discriminator`` entry in :attr:`supplementary_data``. If True, + of the ``discriminator`` entry in :attr:`supplementary_data`. If True, the "discriminator" supplementary data entry must be set. discriminator_multiplier (float): The multiplier to use when computing the extent of the discriminator plot. The range of the series data is taken diff --git a/releasenotes/notes/0.3/cleanup-rb-experiment-f17b6e674ae4e473.yaml b/releasenotes/notes/0.3/cleanup-rb-experiment-f17b6e674ae4e473.yaml index 2261d7b908..396aba8bc6 100644 --- a/releasenotes/notes/0.3/cleanup-rb-experiment-f17b6e674ae4e473.yaml +++ b/releasenotes/notes/0.3/cleanup-rb-experiment-f17b6e674ae4e473.yaml @@ -1,7 +1,7 @@ --- features: - | - The curve fit parameter guess function :func:`~rb_decay` has been added. + The curve fit parameter guess function :func:`~.guess.rb_decay` has been added. This improves the initial parameter estimation of randomized benchmark experiments. 
upgrade:
  - |
diff --git a/releasenotes/notes/0.3/curve-analysis-fixed-parameters-5915a29db1e2628b.yaml b/releasenotes/notes/0.3/curve-analysis-fixed-parameters-5915a29db1e2628b.yaml
index f0701c689d..37a182a26a 100644
--- a/releasenotes/notes/0.3/curve-analysis-fixed-parameters-5915a29db1e2628b.yaml
+++ b/releasenotes/notes/0.3/curve-analysis-fixed-parameters-5915a29db1e2628b.yaml
@@ -13,11 +13,11 @@ deprecations:
     has been deprecated. Please set `fixed_parameters` option instead. This is a
     python dictionary of fixed parameter values keyed on the fit parameter names.
   - |
-    Analysis class :class:`FineDragAnalysis` has been deprecated. Now you can directly
-    set fixed parameters to the :class:`ErrorAmplificationAnalysis` instance as an analysis option.
+    Analysis class ``FineDragAnalysis`` has been deprecated. Now you can directly
+    set fixed parameters on the :class:`.ErrorAmplificationAnalysis` instance as an analysis option.
   - |
-    Analysis class :class:`FineFrequencyAnalysis` has been deprecated. Now you can directly
-    set fixed parameters to the :class:`ErrorAmplificationAnalysis` instance as an analysis option.
+    Analysis class ``FineFrequencyAnalysis`` has been deprecated. Now you can directly
+    set fixed parameters on the :class:`.ErrorAmplificationAnalysis` instance as an analysis option.
   - |
-    Analysis class :class:`FineHalfAngleAnalysis` has been deprecated. Now you can directly
-    set fixed parameters to the :class:`ErrorAmplificationAnalysis` instance as an analysis option.
+    Analysis class ``FineHalfAngleAnalysis`` has been deprecated. Now you can directly
+    set fixed parameters on the :class:`.ErrorAmplificationAnalysis` instance as an analysis option.
diff --git a/releasenotes/notes/0.3/experiment_service_fixes-94730fd6bab83956.yaml b/releasenotes/notes/0.3/experiment_service_fixes-94730fd6bab83956.yaml
index fcfc369120..16e040ef60 100644
--- a/releasenotes/notes/0.3/experiment_service_fixes-94730fd6bab83956.yaml
+++ b/releasenotes/notes/0.3/experiment_service_fixes-94730fd6bab83956.yaml
@@ -1,6 +1,6 @@
 ---
 fixes:
   - |
-    :meth:`.ExperimentData.save()` should now fail gracefully when experiment metadata failed to save instead of crashing.
+    :meth:`.ExperimentData.save` should now fail gracefully when experiment metadata fails to save instead of crashing.
   - |
     The link to the experiment entry in the database service shown after saving is now by default obtained from the service, not hard-coded.
diff --git a/releasenotes/notes/0.3/upgrade-curve-fit-4dc01b1db55ee398.yaml b/releasenotes/notes/0.3/upgrade-curve-fit-4dc01b1db55ee398.yaml
index 021f4954ad..b65d0fac59 100644
--- a/releasenotes/notes/0.3/upgrade-curve-fit-4dc01b1db55ee398.yaml
+++ b/releasenotes/notes/0.3/upgrade-curve-fit-4dc01b1db55ee398.yaml
@@ -1,7 +1,7 @@
 ---
 upgrade:
   - |
-    The :class:`CurveAnalysis` class has been updated to use the covariance between fit
+    The :class:`.CurveAnalysis` class has been updated to use the covariance between fit
     parameters in the error propagation. This will provide more accurate standard error
     for your fit values.
   - |
diff --git a/releasenotes/notes/0.4/curve-analysis-02a702a81e014adf.yaml b/releasenotes/notes/0.4/curve-analysis-02a702a81e014adf.yaml
index 520a276b31..aeb70b210a 100644
--- a/releasenotes/notes/0.4/curve-analysis-02a702a81e014adf.yaml
+++ b/releasenotes/notes/0.4/curve-analysis-02a702a81e014adf.yaml
@@ -41,7 +41,7 @@ features:
   - |
     ``plot_options`` has been added. This was conventionally included
-    in the :class:`SeriesDef` dataclass, which was static and not configurable.
+    in the :class:`.SeriesDef` dataclass, which was static and not configurable.
 Now end-user can update visual representation of curves through this option.
 This option is a dictionary that defines three properties, for example,
diff --git a/releasenotes/notes/0.4/randomized_benchmarking-de55fda43765c34c.yaml b/releasenotes/notes/0.4/randomized_benchmarking-de55fda43765c34c.yaml
index 2942065e4b..07e9d5ebd1 100644
--- a/releasenotes/notes/0.4/randomized_benchmarking-de55fda43765c34c.yaml
+++ b/releasenotes/notes/0.4/randomized_benchmarking-de55fda43765c34c.yaml
@@ -1,5 +1,6 @@
 ---
 fixes:
   - |
-    Initial guess function for the randomized benchmarking analysis :func:`.rb_decay` has been
-    upgraded to give accurate estimate of the decay function base.
+    Initial guess function for the randomized benchmarking analysis
+    :func:`~.guess.rb_decay` has been upgraded to give an accurate estimate of the
+    decay function base.
diff --git a/releasenotes/notes/curve-analysis-4bcc10cf3a39a85d.yaml b/releasenotes/notes/curve-analysis-4bcc10cf3a39a85d.yaml
index d6ded09461..baa00db00e 100644
--- a/releasenotes/notes/curve-analysis-4bcc10cf3a39a85d.yaml
+++ b/releasenotes/notes/curve-analysis-4bcc10cf3a39a85d.yaml
@@ -8,13 +8,13 @@ features:
     and no statistical difference has been introduced with introduction of this option.
 deprecations:
   - |
-    Providing data_sort_key directly to the LMFIT model to instantiate :class:`CurveAnalysis`
+    Providing data_sort_key directly to the LMFIT model to instantiate :class:`.CurveAnalysis`
     has been deprecated. This option is not officially supported by the LMFIT, and thus
     curve analysis cannot guarantee this option is properly managed in all LMFIT model
     subclasses.
 developer:
   - |
-    To map experiment result data to a particular LMFIT model in the :class:`CurveAnalysis`,
+    To map experiment result data to a particular LMFIT model in the :class:`.CurveAnalysis`,
     an author must provide the data_subfit_map analysis option rather than directly
     binding data_sort_key with the target LMFIT model.
 The data_subfit_map option is a dictionary keyed on the model name. For example,
diff --git a/releasenotes/notes/ecr_lib-381cb18885e81abd.yaml b/releasenotes/notes/ecr_lib-381cb18885e81abd.yaml
index 574a34f9c0..5b0133a7ae 100644
--- a/releasenotes/notes/ecr_lib-381cb18885e81abd.yaml
+++ b/releasenotes/notes/ecr_lib-381cb18885e81abd.yaml
@@ -1,7 +1,7 @@
 ---
 features:
   - |
-    A new basis gate library called :class:`EchoedCrossResonance` is been added.
+    A new basis gate library called :class:`.EchoedCrossResonance` has been added.
 upgrade:
   - |
     The :class:`.Calibrations` class has been updated to use the reference
diff --git a/releasenotes/notes/fix-matplotlib-3.6.0-failing-test-5a747f61a9c357b4.yaml b/releasenotes/notes/fix-matplotlib-3.6.0-failing-test-5a747f61a9c357b4.yaml
index a7f8ddd4af..15b569cfd9 100644
--- a/releasenotes/notes/fix-matplotlib-3.6.0-failing-test-5a747f61a9c357b4.yaml
+++ b/releasenotes/notes/fix-matplotlib-3.6.0-failing-test-5a747f61a9c357b4.yaml
@@ -1,6 +1,6 @@
 ---
 fixes:
   - |
-    Fix a bug where :class:`CurveAnalysis` tests would fail with matplotlib 3.6.0 owing to a deprecated
-    function call used in :class:`MplCurveDrawer`. The new :class:`MplCurveDrawer` no-longer uses the
+    Fix a bug where :class:`.CurveAnalysis` tests would fail with matplotlib 3.6.0 owing to a deprecated
+    function call used in :class:`MplCurveDrawer`. The new :class:`MplCurveDrawer` no longer uses the
     deprecated function.
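The `data_subfit_map` release note above describes a dictionary keyed on model name, where each value is the circuit-metadata filter that routes a data point to that model's subfit. The example below is a hedged sketch of that routing idea only: the model names (`"x_model"`, `"y_model"`), the metadata key `"series"`, and the `route` helper are all hypothetical, and no LMFIT dependency is used.

```python
from typing import Any, Dict, List

# Hypothetical data_subfit_map: keyed on model name; each value holds the
# metadata key/value pairs a data point must match to belong to that model.
data_subfit_map: Dict[str, Dict[str, Any]] = {
    "x_model": {"series": "x"},
    "y_model": {"series": "y"},
}


def route(metadata: Dict[str, Any]) -> List[str]:
    """Return the model names whose filter matches this data point's metadata."""
    return [
        name
        for name, criteria in data_subfit_map.items()
        if all(metadata.get(key) == value for key, value in criteria.items())
    ]


# A point tagged series="x" is routed to the "x_model" subfit only.
assert route({"series": "x", "xval": 0.1}) == ["x_model"]
```

The point of keying on the model name (rather than binding a sort key to the LMFIT model itself, as the deprecated `data_sort_key` approach did) is that the mapping stays an ordinary analysis option that curve analysis fully controls.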
diff --git a/releasenotes/notes/tomography-b091ce13d6983bc1.yaml b/releasenotes/notes/tomography-b091ce13d6983bc1.yaml
index 2e46cde734..32a76ca022 100644
--- a/releasenotes/notes/tomography-b091ce13d6983bc1.yaml
+++ b/releasenotes/notes/tomography-b091ce13d6983bc1.yaml
@@ -22,12 +22,12 @@ features:
   - |
     Adds an optional ``mitigator`` kwarg to :class:`.PauliMeasurementBasis`
     which can be used to initialize the basis with a
-    :class:`.LocalReadoutMitigator` to construct a readout error mitigated
+    :class:`~qiskit.result.LocalReadoutMitigator` to construct a readout error mitigated
     basis for use with :class:`.StateTomography` and
     :class:`.ProcessTomography` experiments.
     The :class:`.LocalReadoutError` experiment can be run to obtain the
-    :class:`.LocalReadoutMitigator` from its analysis results.
+    :class:`~qiskit.result.LocalReadoutMitigator` from its analysis results.
   - |
     Adds readout error mitigated tomography experiments
     :class:`.MitigatedStateTomography` and
     :class:`.MitigatedProcessTomography`.
@@ -74,7 +74,7 @@ fixes:
     experiments where if the input circuit contained conditional instructions
     with multiple classical registers the tomography measurement circuits
     would contain incorrect conditionals due to a bug in the
-    :meth:`.QuantumCircuit.compose` method.
+    :meth:`qiskit.circuit.QuantumCircuit.compose` method.
     See Issue #942 for additional details.
@@ -82,7 +82,7 @@ upgrade:
   - |
     Renames the ``qubits``, ``measurement_qubits``, and
     ``preparation_qubits`` init kwargs of :class:`~.StateTomography`,
-    :class:`~.ProcessTomography`, and :class:`~.TomographyExperiment` to
+    :class:`~.ProcessTomography`, and :class:`.TomographyExperiment` to
     ``physical_qubits``, ``measurement_indices`` and ``preparation_indices``
     respectively.
This is to make the intended use of these kwargs more clear as the measurement and preparation args refer to the index of circuit @@ -101,7 +101,7 @@ deprecations: - | Renames the ``qubits``, ``measurement_qubits``, and ``preparation_qubits`` init kwargs of :class:`~.StateTomography`, - :class:`~.ProcessTomography`, and :class:`~.TomographyExperiment` have + :class:`~.ProcessTomography`, and :class:`.TomographyExperiment` have been deprecated. They have been replaced with kwargs ``physical_qubits``, ``measurement_indices`` and ``preparation_indices`` respectively. The renamed kwargs have the same functionality as the deprecated kwargs. diff --git a/requirements-dev.txt b/requirements-dev.txt index c52cf3e90f..19d6a12264 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -6,10 +6,10 @@ jinja2==3.0.3 sphinx~=5.0 jupyter-sphinx>=0.4.0 qiskit-sphinx-theme==1.11.0rc1 -sphinx-autodoc-typehints<=1.20.2 +sphinx-autodoc-typehints>=1.22.0 sphinx-design==0.3.0 pygments>=2.4 -reno>=3.4.0 +reno>=4.0.0 nbsphinx arxiv ddt>=1.6.0 diff --git a/test/curve_analysis/test_curve_fitting.py b/test/curve_analysis/test_curve_fitting.py index defbaa2fec..1c4bdb09f5 100644 --- a/test/curve_analysis/test_curve_fitting.py +++ b/test/curve_analysis/test_curve_fitting.py @@ -17,7 +17,7 @@ from qiskit import QuantumCircuit, transpile from qiskit.providers.basicaer import QasmSimulatorPy from qiskit_experiments.curve_analysis import curve_fit, multi_curve_fit, process_curve_data -from qiskit_experiments.curve_analysis.data_processing import ( +from qiskit_experiments.curve_analysis.utils import ( level2_probability, mean_xy_data, multi_mean_xy_data,