Development #190

Merged
merged 73 commits on Oct 21, 2022
a0ddbed
Added sobol sensitivity
May 7, 2022
18ed2f2
Add unit tests for sobol & sensitivity baseclass
May 7, 2022
d2da7d1
Fixed minor typo
May 7, 2022
2ae7ceb
Added modules in sensitivity __init__.py
May 8, 2022
23c0607
Formatted conf.py with Black
May 8, 2022
5de396e
Added documentation for Sobol indices
May 8, 2022
c38b2e6
Added examples for Sobol indices
May 8, 2022
068fde5
Added Cramér-von Mises sensitivity index
May 8, 2022
f16288d
Add unit tests for Cramér-von Mises sensitivity
May 8, 2022
27305fc
Added documentation Cramér-von Mises sensitivity
May 8, 2022
4d4a012
Added examples Cramér-von Mises sensitivity
May 8, 2022
8ab4892
Fixed references in documentation
May 8, 2022
852be9d
Rearranged order in index
May 8, 2022
69ac38a
Minor fixes in docstrings
May 8, 2022
3f59fd8
Changed docstrings to match rtd_theme
May 8, 2022
92a724c
Added Chatterjee sensitivity index
May 8, 2022
c86aea2
Minor docstring fix
May 8, 2022
0b5a666
Added documentation Chatterjee sensitivity
May 8, 2022
e4407c6
Added examples Chatterjee sensitivity
May 8, 2022
e181e73
Add unit tests for Chatterjee sensitivity
May 8, 2022
1e6118f
Added generalised sobol sensitivity
May 9, 2022
3da45cf
Added documentation for generalised sobol indices
May 9, 2022
29b4727
Add unit tests for generalised sobol sensitivity
May 9, 2022
f15c7bb
Added examples generalised sobol sensitivity
May 9, 2022
21c0289
Improved documentation for Chatterjee indices
May 9, 2022
86dcd1a
Improved documentation for CramervonMises indices
May 9, 2022
d48552c
Improved documentation GeneralisedSobol indices
May 9, 2022
26d27db
Merge pull request #1 from SURGroup/Development
omniscientoctopus May 9, 2022
d842ec6
Improved documentation Sobol indices
May 9, 2022
12a5b82
Added NumpyIntArray to support int arrays as input
May 23, 2022
df66e93
Added Type Hints to Chatterjee module
May 23, 2022
3584406
Added Type Hints to CVM module
May 23, 2022
9fcda77
Added Type Hints to GSI module
May 23, 2022
13d229e
Added Type Hints to Sobol module
May 23, 2022
aff7d09
Added Type Hints to baseclass modules
May 23, 2022
4ceb33c
Changed Chatterjee module name to CamelCase
May 23, 2022
78ba451
Changed CVM module name to CamelCase
May 23, 2022
f9eec9a
Changed GSI module name to CamelCase
May 23, 2022
bfa2c53
Changed Sobol module name to CamelCase
May 23, 2022
e0ad8da
Changed baseclass module names to CamelCase
May 23, 2022
d78b28b
Changed variable name: CI -> confidence interval
May 23, 2022
caffb2d
Changed variable name: CI -> confidence interval
May 23, 2022
f3b5935
Added references to bibliography for Chatterjee
May 23, 2022
e3f0d7a
Added references to bibliography for CVM
May 23, 2022
168f7d3
Added references to bibliography for GSI
May 23, 2022
dd6393f
Added references to bibliography for Sobol
May 23, 2022
08d954f
Added PostProcess module for sensitivity indices.
Jun 6, 2022
d11e395
Added plots in Sobol examples
Jun 6, 2022
bb14e82
Added plots in Generalised Sobol examples
Jun 6, 2022
ef52484
Added plots in CVM index examples
Jun 6, 2022
564086b
Added plots in Chatterjee index examples
Jun 6, 2022
0216941
Added type hints to PostProcess module
Jun 6, 2022
624c373
Fixed minor typo
Jun 6, 2022
7da93b1
Added a basic comparison of sensitivity indices
Jun 6, 2022
08d9180
Minor fix
Jul 3, 2022
dfa17e9
Added convergence study
Jul 3, 2022
786c6e5
Changed a_values in Sobol func
Jul 3, 2022
af5e2a5
Minor fixes to docstring
Jul 3, 2022
ee1d892
Minor changes in documentation
Jul 3, 2022
e57f82d
Described why SA outputs are different
Jul 3, 2022
f4e2bb3
Merge pull request #186 from Uncertainty-Group-Braunschweig/Development
dimtsap Jul 19, 2022
02bbff7
Minor example fixes
dimtsap Jul 21, 2022
813ae54
Naming fixes
dimtsap Aug 9, 2022
0b0ee2f
Fixes failing tests after changing naming conventions
dimtsap Aug 9, 2022
7f3a9dc
Fixes failing examples due to changed naming conventions
dimtsap Aug 10, 2022
bc15e7f
Documentation fixes due to class and attribute renaming
dimtsap Aug 26, 2022
ec2f910
Fixes failing Sobol Sensitivity tests
dimtsap Aug 26, 2022
b82e53e
Very minor updates to the documentation for Chatterjee Sensitivity do…
mds2120 Oct 20, 2022
066e5a3
Update README.rst
dimtsap Oct 20, 2022
a79da94
Delete src/UQpy/surrogates/kriging directory
dimtsap Oct 20, 2022
bba0a67
Revert "Delete src/UQpy/surrogates/kriging directory"
dimtsap Oct 20, 2022
5dba9de
Merge pull request #189 from SURGroup/feature/new_sensitivities
dgiovanis Oct 20, 2022
0c523f1
Merge branch 'master' into Development
dgiovanis Oct 21, 2022
6 changes: 2 additions & 4 deletions README.rst
@@ -1,7 +1,6 @@
|AzureDevops| |PyPIdownloads| |PyPI| |CondaSURG| |CondaPlatforms| |GithubRelease| |Binder| |Docs| |bear-ified|
|AzureDevops| |PyPIdownloads| |PyPI| |CondaPlatforms| |GithubRelease| |Binder| |Docs| |bear-ified|

.. |Docs| image:: https://img.shields.io/readthedocs/uqpy?style=plastic :alt: Read the Docs
.. |CondaSURG| image:: https://img.shields.io/conda/vn/SURG_JHU/uqpy?style=plastic :alt: Conda (channel only)
.. |CondaPlatforms| image:: https://img.shields.io/conda/pn/SURG_JHU/uqpy?style=plastic :alt: Conda
.. |GithubRelease| image:: https://img.shields.io/github/v/release/SURGroup/UQpy?style=plastic :alt: GitHub release (latest by date)
.. |AzureDevops| image:: https://img.shields.io/azure-devops/build/UQpy/5ce1851f-e51f-4e18-9eca-91c3ad9f9900/1?style=plastic :alt: Azure DevOps builds
@@ -35,7 +34,7 @@ Uncertainty Quantification with python (UQpy)
+ + +
| | Promit Chakroborty, Lukáš Novák, Andrew Solanto |
+-----------------------+------------------------------------------------------------------+
| **Contributors:** | Michael Gardner |
| **Contributors:** | Michael Gardner, Prateek Bhustali, Julius Schultz, Ulrich Römer |
+-----------------------+------------------------------------------------------------------+

Contact
@@ -91,7 +90,6 @@ Using Conda
* ::

conda install -c conda-forge uqpy
conda install -c surg_jhu uqpy (latest version)

Clone your fork of the UQpy repo from your GitHub account to your local disk (to get the latest version):

24 changes: 12 additions & 12 deletions azure-pipelines.yml
@@ -152,18 +152,18 @@ jobs:
displayName: Install Anaconda packages
condition: eq(variables['Build.SourceBranch'], 'refs/heads/master')

- bash: |
source activate myEnvironment
conda build . recipe --variants "{'version': ['$(GitVersion.SemVer)']}"
displayName: Build Noarch conda packages
condition: eq(variables['Build.SourceBranch'], 'refs/heads/master')

- bash: |
source activate myEnvironment
anaconda login --username $(ANACONDAUSER) --password $(ANACONDAPW)
anaconda upload /usr/local/miniconda/envs/myEnvironment/conda-bld/noarch/*.tar.bz2
displayName: Upload conda packages
condition: eq(variables['Build.SourceBranch'], 'refs/heads/master')
# - bash: |
# source activate myEnvironment
# conda build . recipe --variants "{'version': ['$(GitVersion.SemVer)']}"
# displayName: Build Noarch conda packages
# condition: eq(variables['Build.SourceBranch'], 'refs/heads/master')
#
# - bash: |
# source activate myEnvironment
# anaconda login --username $(ANACONDAUSER) --password $(ANACONDAPW)
# anaconda upload /usr/local/miniconda/envs/myEnvironment/conda-bld/noarch/*.tar.bz2
# displayName: Upload conda packages
# condition: eq(variables['Build.SourceBranch'], 'refs/heads/master')

- job: "Create_Docker_images"
dependsOn: Build_UQpy_and_run_tests
105 changes: 9 additions & 96 deletions docs/code/inference/bayes_model_selection/bayes_model_selection.py
@@ -46,8 +46,9 @@
error_covariance = var_n * np.eye(50)
print(param_true.shape)

z = RunModel(samples=param_true, model_script='local_pfn_models.py', model_object_name='model_quadratic', vec=False,
var_names=['theta_1', 'theta_2'])
from UQpy.run_model.model_execution.PythonModel import PythonModel
m = PythonModel(model_script='local_pfn_models.py', model_object_name='model_quadratic', var_names=['theta_1', 'theta_2'])
z = RunModel(samples=param_true, model=m)
data_clean = z.qoi_list[0].reshape((-1,))
data = data_clean + Normal(scale=np.sqrt(var_n)).rvs(nsamples=data_clean.size, random_state=456).reshape((-1,))

@@ -66,55 +67,10 @@
model_prior_stds = [[10.], [1., 1.], [1., 2., 0.25]]


evidences = []
model_posterior_means = []
model_posterior_stds = []
for n, model in enumerate(model_names):
# compute matrix X
X = np.linspace(0, 10, 50).reshape((-1, 1))
if n == 1: # quadratic model
X = np.concatenate([X, X ** 2], axis=1)
if n == 2: # cubic model
X = np.concatenate([X, X ** 2, X ** 3], axis=1)

# compute posterior pdf
m_prior = np.array(model_prior_means[n]).reshape((-1, 1))
S_prior = np.diag(np.array(model_prior_stds[n]) ** 2)
S_posterior = np.linalg.inv(1 / var_n * np.matmul(X.T, X) + np.linalg.inv(S_prior))
m_posterior = np.matmul(S_posterior,
1 / var_n * np.matmul(X.T, data.reshape((-1, 1))) + np.matmul(np.linalg.inv(S_prior),
m_prior))
m_prior = m_prior.reshape((-1,))
m_posterior = m_posterior.reshape((-1,))
model_posterior_means.append(list(m_posterior))
model_posterior_stds.append(list(np.sqrt(np.diag(S_posterior))))
print('posterior mean and covariance for ' + model)
print(m_posterior, S_posterior)

# compute evidence, evaluate the formula at the posterior mean
like_theta = multivariate_normal.pdf(data, mean=np.matmul(X, m_posterior).reshape((-1,)), cov=error_covariance)
prior_theta = multivariate_normal.pdf(m_posterior, mean=m_prior, cov=S_prior)
posterior_theta = multivariate_normal.pdf(m_posterior, mean=m_posterior, cov=S_posterior)
evidence = like_theta * prior_theta / posterior_theta
evidences.append(evidence)
print('evidence for ' + model + '= {}\n'.format(evidence))

# compute the posterior probability of each model
tmp = [1 / 3 * evidence for evidence in evidences]
model_posterior_probas = [p / sum(tmp) for p in tmp]

print('posterior probabilities of all three models')
print(model_posterior_probas)

#%% md
#
# Define the models for use in UQpy

#%%

candidate_models = []
for n, model_name in enumerate(model_names):
run_model = RunModel(model_script='local_pfn_models.py', model_object_name=model_name, vec=False)
m = PythonModel(model_script='local_pfn_models.py', model_object_name=model_name)
run_model = RunModel(model=m)
prior = JointIndependent([Normal(loc=m, scale=std) for m, std in
zip(model_prior_means[n], model_prior_stds[n])])
model = ComputationalModel(n_parameters=model_n_params[n],
@@ -123,60 +79,18 @@
name=model_name)
candidate_models.append(model)


#%% md
#
# Run MCMC for one model

#%%

# Quadratic model
sampling = MetropolisHastings(args_target=(data, ),
log_pdf_target=candidate_models[1].evaluate_log_posterior,
jump=10, burn_length=100,
proposal=JointIndependent([Normal(scale=0.1), ] * 2),
seed=[0., 0.], random_state=123)

bayesMCMC = BayesParameterEstimation(sampling_class=sampling,
inference_model=candidate_models[1],
data=data,
nsamples=3500)

# plot prior, true posterior and estimated posterior
fig, ax = plt.subplots(1, 2, figsize=(16, 5))
for n_p in range(2):
domain_plot = np.linspace(-0.5, 3, 200)
ax[n_p].plot(domain_plot, norm.pdf(domain_plot, loc=model_prior_means[1][n_p], scale=model_prior_stds[1][n_p]),
label='prior', color='green', linestyle='--')
ax[n_p].plot(domain_plot, norm.pdf(domain_plot, loc=model_posterior_means[1][n_p],
scale=model_posterior_stds[1][n_p]),
label='true posterior', color='red', linestyle='-')
ax[n_p].hist(bayesMCMC.sampler.samples[:, n_p], density=True, bins=30, label='estimated posterior MCMC')
ax[n_p].legend()
ax[n_p].set_title('MCMC for quadratic model')
plt.show()


#%% md
#
# Run Bayesian Model Selection for all three models

#%%

proposals = [Normal(scale=0.1),
JointIndependent([Normal(scale=0.1), Normal(scale=0.1)]),
JointIndependent([Normal(scale=0.15), Normal(scale=0.1), Normal(scale=0.05)])]
nsamples = [2000, 6000, 14000]
nburn = [500, 2000, 4000]
jump = [5, 10, 25]
nsamples = [2000, 2000, 2000]
nburn = [1000, 1000, 1000]
jump = [2, 2, 2]


sampling_inputs=list()
estimators = []
for i in range(3):
sampling = MetropolisHastings(args_target=(data, ),
log_pdf_target=candidate_models[i].evaluate_log_posterior,
jump=jump[i],
sampling = MetropolisHastings(jump=jump[i],
burn_length=nburn[i],
proposal=proposals[i],
seed=model_prior_means[i],
@@ -185,7 +99,6 @@
sampling_class=sampling))

selection = BayesModelSelection(parameter_estimators=estimators,
data=data,
prior_probabilities=[1. / 3., 1. / 3., 1. / 3.],
nsamples=nsamples)

2 changes: 1 addition & 1 deletion docs/code/sampling/mcmc/plot_mcmc_metropolis_hastings.py
@@ -58,7 +58,7 @@ def log_rosenbrock_with_param(x, p):
plt.show()

plt.figure()
x = MetropolisHastings(dimension=2, pdf_target=log_rosenbrock_with_param, burn_length=500,
x = MetropolisHastings(dimension=2, log_pdf_target=log_rosenbrock_with_param, burn_length=500,
jump=50, n_chains=1, args_target=(20,),
nsamples=500)
plt.plot(x.samples[:, 0], x.samples[:, 1], 'o')
15 changes: 15 additions & 0 deletions docs/code/sensitivity/chatterjee/README.rst
@@ -0,0 +1,15 @@
Chatterjee indices
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
These examples serve as a guide for using the Chatterjee sensitivity module. They have been taken from various papers to enable validation of the implementation and have been referenced accordingly.

1. **Ishigami function**

In addition to the Pick and Freeze scheme, the Sobol indices can be estimated using the rank statistics approach :cite:`gamboa2020global`. We demonstrate this estimation of the Sobol indices using the Ishigami function.

2. **Exponential function**

For the Exponential model, analytical Cramér-von Mises indices are available :cite:`CVM` and since they are equivalent to the Chatterjee indices in the sample limit, they are shown here.

3. **Sobol function**

This example was considered in :cite:`gamboa2020global` (page 18) to compare the Pick and Freeze scheme with the rank statistics approach for estimating the Sobol indices.
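The rank-statistics estimator referenced above admits a very compact implementation. The sketch below is illustrative only (it is not the UQpy module): it computes Chatterjee's rank coefficient for a single input from a plain i.i.d. sample, with no pick-and-freeze resampling, assuming no ties in the output.

```python
import numpy as np

def chatterjee_xi(x, y):
    """Chatterjee's rank correlation coefficient for one input.

    Sort the (x, y) pairs by x, take the ranks r_i of y in that order,
    and return  1 - 3 * sum |r_{i+1} - r_i| / (n^2 - 1).
    Assumes continuous y (no ties).
    """
    n = len(x)
    order = np.argsort(x)                      # sort pairs by x
    ranks = np.argsort(np.argsort(y[order]))   # ranks of y in x-sorted order
    return 1.0 - 3.0 * np.abs(np.diff(ranks)).sum() / (n ** 2 - 1)

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)  # y strongly depends on x
print(chatterjee_xi(x, y))  # close to 1 for a near-deterministic relation
```

For an input that is independent of the output, the coefficient is close to 0, which is what makes it usable as a sensitivity screening measure.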
76 changes: 76 additions & 0 deletions docs/code/sensitivity/chatterjee/chatterjee_exponential.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,76 @@
"""

Exponential function
==============================================

The exponential function was used in [1]_ to demonstrate the
Cramér-von Mises indices. Chatterjee indices approach the Cramér-von Mises
indices in the sample limit, as this example demonstrates.

.. math::
f(x) := \exp(x_1 + 2x_2), \quad x_1, x_2 \sim \mathcal{N}(0, 1)

.. [1] Gamboa, F., Klein, T., & Lagnoux, A. (2018). Sensitivity Analysis Based on \
Cramér-von Mises Distance. SIAM/ASA Journal on Uncertainty Quantification, 6(2), \
522-548. doi:10.1137/15M1025621. (`Link <https://doi.org/10.1137/15M1025621>`_)

"""

# %%
from UQpy.run_model.RunModel import RunModel
from UQpy.run_model.model_execution.PythonModel import PythonModel
from UQpy.distributions import Normal
from UQpy.distributions.collection.JointIndependent import JointIndependent
from UQpy.sensitivity.ChatterjeeSensitivity import ChatterjeeSensitivity
from UQpy.sensitivity.PostProcess import *

np.random.seed(123)

# %% [markdown]
# **Define the model and input distributions**

# Create Model object
model = PythonModel(
model_script="local_exponential.py",
model_object_name="evaluate",
var_names=[
"X_1",
"X_2",
],
delete_files=True,
)

runmodel_obj = RunModel(model=model)

# Define distribution object
dist_object = JointIndependent([Normal(0, 1)] * 2)

# %% [markdown]
# **Compute Chatterjee indices**

# %%
SA = ChatterjeeSensitivity(runmodel_obj, dist_object)

# Compute Chatterjee indices using the rank statistics approach
SA.run(n_samples=1_000_000)

# %% [markdown]
# **Chatterjee indices**
#
# Chatterjee indices approach the Cramér-von Mises indices in the sample limit.
#
# Expected value of the sensitivity indices:
#
# :math:`S^1_{CVM} = \frac{6}{\pi} \operatorname{arctan}(2) - 2 \approx 0.1145`
#
# :math:`S^2_{CVM} = \frac{6}{\pi} \operatorname{arctan}(\sqrt{19}) - 2 \approx 0.5693`

# %%
SA.first_order_chatterjee_indices

# **Plot the Chatterjee indices**
fig1, ax1 = plot_sensitivity_index(
SA.first_order_chatterjee_indices[:, 0],
plot_title="Chatterjee indices",
color="C2",
)
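The example above drives a model defined in `local_exponential.py`, which is not part of this diff. Below is a hypothetical sketch of what such a script could contain, together with a numerical check of the analytical Cramér-von Mises values quoted in the example. The `evaluate` signature is an assumption; consult the UQpy `PythonModel` documentation for the exact calling convention.

```python
import numpy as np

# Hypothetical sketch of local_exponential.py: the model f(x) = exp(x_1 + 2 x_2),
# assuming PythonModel passes `evaluate` an array of samples with one row per
# sample (this convention is an assumption, not taken from the diff).
def evaluate(samples):
    samples = np.atleast_2d(samples)
    return np.exp(samples[:, 0] + 2.0 * samples[:, 1])

# Check the analytical Cramér-von Mises indices quoted in the example.
s1 = 6.0 / np.pi * np.arctan(2.0) - 2.0             # approx. 0.1145
s2 = 6.0 / np.pi * np.arctan(np.sqrt(19.0)) - 2.0   # approx. 0.5693
print(round(s1, 4), round(s2, 4))
```

Running the full example with `n_samples=1_000_000` should produce first-order Chatterjee estimates near these two values, since the Chatterjee indices converge to the Cramér-von Mises indices in the sample limit.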