Prepare release #2300

Merged: 9 commits, Dec 27, 2023
Changes from all commits
3 changes: 3 additions & 0 deletions .gitignore
@@ -82,6 +82,9 @@ doc/source/user_guide/wrappers/*
!doc/source/user_guide/wrappers/*.md

# eight-schools files
doc/source/getting_started/schools*
!doc/source/user_guide/wrappers/schools.stan
!doc/source/user_guide/wrappers/schools.json
doc/source/getting_started/eight_school*
doc/source/getting_started/sample_data/

10 changes: 6 additions & 4 deletions CHANGELOG.md
@@ -1,21 +1,23 @@
# Change Log

## v0.x.x Unreleased
## v0.17.0 (2023 Dec 22)

### New features
- Add prior sensitivity diagnostic `psens` ([2093](https://github.com/arviz-devs/arviz/pull/2093))
- Add filter_vars functionality to `InferenceData.to_dataframe` method ([2277](https://github.com/arviz-devs/arviz/pull/2277))

### Maintenance and fixes

- Update requirements: matplotlib>=3.5, pandas>=1.4.0, numpy>=1.22.0 ([2280](https://github.com/arviz-devs/arviz/pull/2280))
- Fix behaviour of `plot_ppc` when dimension order isn't `chain, draw, ...` ([2283](https://github.com/arviz-devs/arviz/pull/2283))
- Avoid repeating the variable name in `plot_ppc`, `plot_bpv`, `plot_loo_pit`... when repeated. ([2283](https://github.com/arviz-devs/arviz/pull/2283))
- Add support for the latest CmdStanPy. ([2287](https://github.com/arviz-devs/arviz/pull/2287))

### Deprecation
- Fix import error on windows due to missing encoding argument ([2300](https://github.com/arviz-devs/arviz/pull/2300))
- Add ``__delitem__`` method to InferenceData ([2292](https://github.com/arviz-devs/arviz/pull/2292))

### Documentation
- Improve the docstring of `psislw` ([2300](https://github.com/arviz-devs/arviz/pull/2300))
- Rerun the quickstart and working with InferenceData notebooks ([2300](https://github.com/arviz-devs/arviz/pull/2300))

- Several fixes in `plot_ppc` docstring ([2283](https://github.com/arviz-devs/arviz/pull/2283))

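Two of the entries above lend themselves to a quick illustration. The sketch below is not part of the diff: the dataset is just a bundled ArviZ example, and the `to_dataframe` keyword names are assumed to follow the `var_names`/`filter_vars` pattern used by the plotting functions.

```python
import arviz as az

idata = az.load_arviz_data("centered_eight")

# __delitem__ on InferenceData (2292): drop a whole group in place
del idata["posterior_predictive"]

# filter_vars in to_dataframe (2277): keep variables whose names contain "theta"
# (keyword names assumed, see note above)
df = idata.to_dataframe(var_names=["theta"], filter_vars="like")
```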
2 changes: 1 addition & 1 deletion arviz/__init__.py
@@ -1,6 +1,6 @@
# pylint: disable=wildcard-import,invalid-name,wrong-import-position
"""ArviZ is a library for exploratory analysis of Bayesian models."""
__version__ = "0.17.0.dev0"
__version__ = "0.17.0"

import logging
import os
2 changes: 1 addition & 1 deletion arviz/plots/bpvplot.py
@@ -165,7 +165,7 @@ def plot_bpv(
Notes
-----
Discrete data is smoothed before computing either p-values or u-values using the
function :func:`smooth_data`
function :func:`~arviz.smooth_data`
Examples
--------
1 change: 1 addition & 0 deletions arviz/stats/__init__.py
@@ -28,6 +28,7 @@
"autocorr",
"autocov",
"make_ufunc",
"smooth_data",
"wrap_xarray_ufunc",
"reloo",
"_calculate_ics",
13 changes: 7 additions & 6 deletions arviz/stats/stats.py
@@ -880,17 +880,18 @@ def psislw(log_weights, reff=1.0):
Parameters
----------
log_weights: array
log_weights : DataArray or (..., N) array-like
Array of size (n_observations, n_samples)
reff: float
reff : float, default 1
relative MCMC efficiency, ``ess / n``
Returns
-------
lw_out: array
Smoothed log weights
kss: array
Pareto tail indices
lw_out : DataArray or (..., N) ndarray
Smoothed, truncated and normalized log weights.
kss : DataArray or (...) ndarray
Estimates of the shape parameter *k* of the generalized Pareto
distribution.
References
----------
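The reworked docstring above is easier to read next to a concrete call. A minimal sketch of the documented signature follows; negating and stacking the pointwise log-likelihood into a `__sample__` dimension is the usual PSIS-LOO recipe and an assumption here, not something this diff prescribes.

```python
import arviz as az

idata = az.load_arviz_data("centered_eight")
# collapse chain/draw into the __sample__ dimension psislw operates on
log_like = idata.log_likelihood["obs"].stack(__sample__=("chain", "draw"))

# raw importance log-weights are the negated log-likelihood; reff=0.7 is illustrative
lw, khat = az.psislw(-log_like, reff=0.7)
# lw: smoothed, truncated and normalized log weights (one row per observation)
# khat: one Pareto shape estimate per observation
```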
22 changes: 20 additions & 2 deletions arviz/stats/stats_utils.py
@@ -16,7 +16,7 @@
from .density_utils import histogram as _histogram


__all__ = ["autocorr", "autocov", "ELPDData", "make_ufunc", "wrap_xarray_ufunc"]
__all__ = ["autocorr", "autocov", "ELPDData", "make_ufunc", "smooth_data", "wrap_xarray_ufunc"]


def autocov(ary, axis=-1):
@@ -564,7 +564,25 @@ def _circular_standard_deviation(samples, high=2 * np.pi, low=0, skipna=False, a


def smooth_data(obs_vals, pp_vals):
"""Smooth data, helper function for discrete data in plot_pbv, loo_pit and plot_loo_pit."""
"""Smooth data using a cubic spline.
Helper function for discrete data in plot_bpv, loo_pit and plot_loo_pit.
Parameters
----------
obs_vals : (N) array-like
Observed data
pp_vals : (S, N) array-like
Posterior predictive samples. ``N`` is the number of observations,
and ``S`` is the number of samples (generally n_chains*n_draws).
Returns
-------
obs_vals : (N) ndarray
Smoothed observed data
pp_vals : (S, N) ndarray
Smoothed posterior predictive samples
"""
x = np.linspace(0, 1, len(obs_vals))
csi = CubicSpline(x, obs_vals)
obs_vals = csi(np.linspace(0.01, 0.99, len(obs_vals)))
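With `smooth_data` now exported (see the `__all__` change above and the new API entry below), here is a short sketch of how it might be called; the Poisson draws are made up purely for illustration.

```python
import numpy as np
from arviz.stats.stats_utils import smooth_data

rng = np.random.default_rng(0)
obs = rng.poisson(5, size=50)        # (N,) discrete observed data
pp = rng.poisson(5, size=(200, 50))  # (S, N) posterior predictive samples

obs_smooth, pp_smooth = smooth_data(obs, pp)
print(obs_smooth.shape, pp_smooth.shape)  # (50,) (200, 50), now continuous-valued
```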
2 changes: 1 addition & 1 deletion arviz/tests/base_tests/test_plots_matplotlib.py
@@ -1921,7 +1921,7 @@ def test_plot_ts(kwargs):
dims={"y": ["obs_dim"], "z": ["pred_dim"]},
)

ax = plot_ts(idata=idata, y="y", show=True, **kwargs)
ax = plot_ts(idata=idata, y="y", **kwargs)
assert np.all(ax)


3 changes: 2 additions & 1 deletion arviz/utils.py
@@ -668,7 +668,8 @@ def _load_static_files():
Clone from xarray.core.formatted_html_template.
"""
return [
importlib.resources.files("arviz").joinpath(fname).read_text() for fname in STATIC_FILES
importlib.resources.files("arviz").joinpath(fname).read_text(encoding="utf-8")
for fname in STATIC_FILES
]


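For context on this fix (a sketch, not part of the diff): `read_text()` without an `encoding` argument falls back to the locale's preferred encoding, often cp1252 on Windows, which fails on UTF-8 bytes in the bundled templates and caused the import error noted in the changelog. `"__init__.py"` below only stands in for the real `STATIC_FILES` entries, which are not shown in the hunk.

```python
import importlib.resources

# explicit encoding keeps the read portable across platforms and locales
text = (
    importlib.resources.files("arviz")
    .joinpath("__init__.py")  # placeholder for a STATIC_FILES entry
    .read_text(encoding="utf-8")
)
```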
2 changes: 1 addition & 1 deletion doc/source/_static/custom.css
@@ -80,7 +80,7 @@ html[data-theme="dark"] {
}
.primary-button {
background-color: var(--pst-color-primary);
color: var(--pst-color-background);
color: var(--pst-color-background) !important;
}
.secondary-button {
background-color: var(--pst-color-background);
1 change: 1 addition & 0 deletions doc/source/api/stats_utils.rst
@@ -11,4 +11,5 @@ Stats utils
autocov
autocorr
make_ufunc
smooth_data
wrap_xarray_ufunc
5 changes: 3 additions & 2 deletions doc/source/conf.py
@@ -99,15 +99,16 @@
autodoc_typehints = "none"
numpydoc_xref_param_type = True
numpydoc_xref_ignore = {
"of", "or", "optional", "default", "1D", "2D", "3D", "n-dimensional", "M", "N", "K",
"of", "or", "optional", "default", "1D", "2D", "3D", "n-dimensional", "K", "M", "N", "S",
}
numpydoc_xref_aliases = {
"DataArray": ":class:`~xarray.DataArray`",
"Dataset": ":class:`~xarray.Dataset`",
"Labeller": ":ref:`Labeller <labeller_api>`",
"ndarray": ":class:`~numpy.ndarray`",
"InferenceData": ":class:`~arviz.InferenceData`",
"matplotlib_axes": ":class:`matplotlib Axes <matplotlib.axes.Axes>`",
"bokeh_figure": ":class:`Bokeh Figure <bokeh.plotting.figure>`",

}

# The base toctree document.
3,906 changes: 745 additions & 3,161 deletions doc/source/getting_started/Introduction.ipynb

Large diffs are not rendered by default.

10,735 changes: 8,371 additions & 2,364 deletions doc/source/getting_started/WorkingWithInferenceData.ipynb

Large diffs are not rendered by default.

5 changes: 5 additions & 0 deletions doc/source/getting_started/schools.json
@@ -0,0 +1,5 @@
{
"J": 8,
"y": [28, 8, -3, 7, -1, 1, 18, 12],
"sigma": [15, 10, 16, 11, 9, 11, 10, 18]
}
26 changes: 26 additions & 0 deletions doc/source/getting_started/schools.stan
@@ -0,0 +1,26 @@
data {
int<lower=0> J;
array[J] real y;
array[J] real<lower=0> sigma;
}

parameters {
real mu;
real<lower=0> tau;
array[J] real theta;
}

model {
mu ~ normal(0, 5);
tau ~ cauchy(0, 5);
theta ~ normal(mu, tau);
y ~ normal(theta, sigma);
}
generated quantities {
array[J] real log_lik;
array[J] real y_hat;
for (j in 1:J) {
log_lik[j] = normal_lpdf(y[j] | theta[j], sigma[j]);
y_hat[j] = normal_rng(theta[j], sigma[j]);
}
}
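These two files back the eight-schools example in the getting-started pages. A rough sketch of how they are typically consumed follows; the CmdStanPy workflow and the group mapping are common usage, not something this PR specifies.

```python
import arviz as az
from cmdstanpy import CmdStanModel

model = CmdStanModel(stan_file="doc/source/getting_started/schools.stan")
fit = model.sample(data="doc/source/getting_started/schools.json", chains=4)

idata = az.from_cmdstanpy(
    posterior=fit,
    posterior_predictive="y_hat",   # generated quantities declared in the model
    log_likelihood="log_lik",
    observed_data={"y": [28, 8, -3, 7, -1, 1, 18, 12]},  # values from schools.json
)
print(az.summary(idata, var_names=["mu", "tau"]))
```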