From 8ff05ff459cd7f3fecc1c3c3317b43b5ec5d29fb Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Wed, 10 Apr 2024 16:52:05 +0200 Subject: [PATCH 01/22] proofread readme.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 5033128bc..cc73194c7 100644 --- a/README.md +++ b/README.md @@ -135,7 +135,7 @@ Alternatively, we could use a broadcasting syntax. end ``` -As you can see, `RxInfer` offers a model specification syntax that resembles closely to the mathematical equations defined above. We use `datavar` function to create "clamped" variables that take specific values at a later date. $\theta \sim \mathrm{Beta}(2.0, 7.0)$ expression creates random variable $θ$ and assigns it as an output of $\mathrm{Beta}$ node in the corresponding FFG. +As you can see, `RxInfer` offers a model specification syntax that closely resembles the mathematical equations defined above. The $\theta \sim \mathrm{Beta}(2.0, 7.0)$ expression creates a random variable $θ$ and assigns it as the output of a $\mathrm{Beta}$ node in the corresponding FFG. > [!NOTE] > `RxInfer.jl` uses `GraphPPL.jl` for model and constraints specification. `GraphPPL.jl` API has been changed in version `4.0.0`. See [Migration Guide](https://reactivebayes.github.io/GraphPPL.jl/stable/) for more details. 
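For readers skimming the patch, the README passage being edited refers to a Beta–Bernoulli coin-toss model. A minimal sketch of what that model looks like in RxInfer's `@model` syntax (the model name and observation argument `y` are illustrative, not taken from the README):

```julia
using RxInfer

# Hypothetical reconstruction of the coin-toss model the patched text refers to.
# The prior matches the θ ~ Beta(2.0, 7.0) expression discussed above.
@model function coin_model(y)
    θ ~ Beta(2.0, 7.0)
    # Each observation is a conditionally independent Bernoulli draw
    for i in eachindex(y)
        y[i] ~ Bernoulli(θ)
    end
end
```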
From cee6c5c809eb3e135573f5ea5cde9507d7d668b5 Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Wed, 10 Apr 2024 16:58:06 +0200 Subject: [PATCH 02/22] Proofread index.md --- docs/src/index.md | 21 +++++++-------------- 1 file changed, 7 insertions(+), 14 deletions(-) diff --git a/docs/src/index.md b/docs/src/index.md index 5be214c1e..562ac1fa9 100644 --- a/docs/src/index.md +++ b/docs/src/index.md @@ -27,29 +27,22 @@ Given a probabilistic model, RxInfer allows for an efficient message-passing bas ## Why RxInfer -Many important AI applications, including audio processing, self-driving vehicles, weather forecasting, and extended-reality video processing require continually solving an inference task in sophisticated probabilistic models with a large number of latent variables. -Often, the inference task in these applications must be performed continually and in real-time in response to new observations. -Popular MC-based inference methods, such as the No U-Turn Sampler (NUTS) or Hamiltonian Monte Carlo (HMC) sampling, rely on computationally heavy sampling procedures that do not scale well to probabilistic models with thousands of latent states. -Therefore, while MC-based inference is an very versatile tool, it is practically not suitable for real-time applications. -While the alternative variational inference method (VI) promises to scale better to large models than sampling-based inference, VI requires the derivation of gradients of a "variational Free Energy" cost function. -For large models, manual derivation of these gradients might not be feasible, while automated "black-box" gradient methods do not scale either because they are not capable of taking advantage of sparsity or conjugate pairs in the model. -Therefore, while Bayesian inference is known as the optimal data processing framework, in practice, real-time AI applications rely on much simpler, often ad hoc, data processing algorithms. 
- -RxInfer aims to remedy these issues by running efficient Bayesian inference in sophisticated probabilistic models, -taking advantage of local conjugate relationships in probabilistic models, and focusing on real-time Bayesian inference in large state-space models with thousands of latent variables. In addition, RxInfer provides a straightforward way to extend its functionality with custom factor nodes and message passing update rules. The engine is capable of running -various Bayesian inference algorithms in different parts of the factor graph of a single probabilistic model. This makes it easier -to explore different "what-if" scenarios and enables very efficient inference in specific cases. +Many important AI applications, including audio processing, self-driving vehicles, weather forecasting, and extended-reality video processing, require continually solving an inference task in sophisticated probabilistic models with a large number of latent variables. Often, the inference task in these applications must be performed continually and in real-time in response to new observations. + +Popular MC-based inference methods, such as the No U-Turn Sampler (NUTS) or Hamiltonian Monte Carlo (HMC) sampling, rely on computationally heavy sampling procedures that do not scale well to probabilistic models with thousands of latent states. Therefore, while MC-based inference is a very versatile tool, it is practically not suitable for real-time applications. While the alternative variational inference method (VI) promises to scale better to large models than sampling-based inference, VI requires the derivation of gradients of a "Variational Free Energy" cost function. For large models, manual derivation of these gradients might not be feasible, while automated "black-box" gradient methods do not scale either because they are not capable of taking advantage of sparsity or conjugate pairs in the model. 
Therefore, while Bayesian inference is known as the optimal data processing framework, in practice, real-time AI applications rely on much simpler, often ad hoc, data processing algorithms. + +RxInfer aims to remedy these issues by running efficient Bayesian inference in sophisticated probabilistic models, taking advantage of local conjugate relationships in probabilistic models, and focusing on real-time Bayesian inference in large state-space models with thousands of latent variables. In addition, RxInfer provides a straightforward way to extend its functionality with custom factor nodes and message passing update rules. The engine is capable of running various Bayesian inference algorithms in different parts of the factor graph of a single probabilistic model. This makes it easier to explore different "what-if" scenarios and enables very efficient inference in specific cases. ## Package Features - User friendly syntax for specification of probabilistic models, achieved with [`GraphPPL`](https://github.com/ReactiveBayes/GraphPPL.jl). - Support for hybrid models combining discrete and continuous latent variables. - - Factorisation and functional form constraints specification. + - Factorization and functional form constraints specification. - Graph visualisation and extensions with different custom plugins. - Saving graph on a disk and re-loading it later on. - Automatic generation of message passing algorithms, achieved with [`ReactiveMP`](https://github.com/ReactiveBayes/ReactiveMP.jl). - Support for hybrid distinct message passing inference algorithm under a unified paradigm. - - Evaluation of Bethe free energy as a model performance measure. + - Evaluation of Bethe Free Energy as a model performance measure. - Schedule-free reactive message passing API. - Scalability for large models with millions of parameters and observations. - High performance. 
From 325fbd85f1a197e7500c2fe61eea81eb0c20a2e0 Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Thu, 11 Apr 2024 15:22:52 +0200 Subject: [PATCH 05/22] fix inconsistent docs in functional forms --- docs/src/library/functional-forms.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/library/functional-forms.md b/docs/src/library/functional-forms.md index aa4b8e7fc..f2170ee63 100644 --- a/docs/src/library/functional-forms.md +++ b/docs/src/library/functional-forms.md @@ -100,7 +100,7 @@ form_constraint.fixed_value = Gamma(1.0, 1.0) ## [CompositeFormConstraint](@id lib-forms-composite-constraint) -It is possible to create a composite functional form constraint with the `+` operator, e.g: +It is possible to create a composite functional form by stacking operators, e.g: ```@example constraints-functional-forms @constraints begin From 9871e0b785b387bad6a154c42d6b127f4dc251b0 Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Thu, 11 Apr 2024 15:28:00 +0200 Subject: [PATCH 06/22] fix typo in model construction page --- docs/src/library/model-construction.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/library/model-construction.md b/docs/src/library/model-construction.md index 0776eebfc..5151ae3ee 100644 --- a/docs/src/library/model-construction.md +++ b/docs/src/library/model-construction.md @@ -11,7 +11,7 @@ Also read the [_Model Specification_](@ref user-guide-model-specification) guide ## [`@model` macro](@id lib-model-construction-model-macro) -`RxInfer` operates with so-called [graphical probabilistic models](https://en.wikipedia.org/wiki/Graphical_model), more specifically [factor graphs](https://en.wikipedia.org/wiki/Factor_graph). Working with graphs directly is, however, tedius and error-prone, especially for large models. To simplify the process, `RxInfer` exports the `@model` macro, which translates a textual description of a probabilistic model into a corresponding factor graph representation. 
+`RxInfer` operates with so-called [graphical probabilistic models](https://en.wikipedia.org/wiki/Graphical_model), more specifically [factor graphs](https://en.wikipedia.org/wiki/Factor_graph). Working with graphs directly is, however, tedious and error-prone, especially for large models. To simplify the process, `RxInfer` exports the `@model` macro, which translates a textual description of a probabilistic model into a corresponding factor graph representation. ```@docs RxInfer.@model From de8d844caa2b687bbae3086f61f7722e1d962dc6 Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Thu, 11 Apr 2024 15:32:10 +0200 Subject: [PATCH 07/22] We do have modularity!!!!! --- docs/src/manuals/comparison.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/manuals/comparison.md b/docs/src/manuals/comparison.md index 6e632bdf4..a45d4f443 100644 --- a/docs/src/manuals/comparison.md +++ b/docs/src/manuals/comparison.md @@ -10,7 +10,7 @@ Nowadays there's plenty of probabilistic programming languages and packages avai | Toolbox | Universality | Efficiency | Expressiveness | Debugging & Visualization | Modularity | Inference Engine | Language | Community & Ecosystem | | -------------------------------------------------------------------- | ------------ | ---------- | -------------- | ------------------------- | ---------- | ---------------- | -------- | --------------------- | -| [**RxInfer.jl**](https://rxinfer.ml/) | ~ | ✓ | ✓ | ~ | ✗ | Message-passing | Julia | ✗ | +| [**RxInfer.jl**](https://rxinfer.ml/) | ~ | ✓ | ✓ | ~ | ✓ | Message-passing | Julia | ✗ | | [**ForneyLab.jl**](https://github.com/biaslab/ForneyLab.jl) | ✗ | ~ | ✗ | ~ | ✗ | Message-passing | Julia | ✗ | | [**Infer.net**](https://dotnet.github.io/infer/) | ~ | ✓ | ✗ | ✓ | ✗ | Message-passing | C# | ✗ | | [**PGMax**](https://github.com/google-deepmind/PGMax) | ✗ | ✓ | ✗ | ✓ | ✗ | Message-passing | Python | ✗ | From 70145f53ceaf18d4f7ad01e464bdb5880e3263b2 Mon Sep 17 00:00:00 
2001 From: Wouter Nuijten Date: Thu, 11 Apr 2024 15:33:50 +0200 Subject: [PATCH 08/22] fix typo --- docs/src/manuals/comparison.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/manuals/comparison.md b/docs/src/manuals/comparison.md index a45d4f443..e12dff7c2 100644 --- a/docs/src/manuals/comparison.md +++ b/docs/src/manuals/comparison.md @@ -28,7 +28,7 @@ Nowadays there's plenty of probabilistic programming languages and packages avai **Notes**: - **Universality**: Denotes the capability to depict a vast array of probabilistic models. -- **Efficiency**: Highlights computational competence. A "–" in this context suggests perceived slowness. +- **Efficiency**: Highlights computational competence. A "~" in this context suggests perceived slowness. - **Expressiveness**: Assesses the ability to concisely formulate intricate probabilistic models. - **Debugging & Visualization**: Evaluates the suite of tools for model debugging and visualization. - **Modularity**: Reflects the potential to create models by integrating smaller models. From ca209688ee9e37026f169f0ccc712ad71c7624af Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Thu, 11 Apr 2024 15:34:53 +0200 Subject: [PATCH 09/22] merge lines --- docs/src/manuals/model-specification.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/docs/src/manuals/model-specification.md b/docs/src/manuals/model-specification.md index 909e2cc2d..b266fbd7d 100644 --- a/docs/src/manuals/model-specification.md +++ b/docs/src/manuals/model-specification.md @@ -18,8 +18,7 @@ where `model_arguments...` may include both hyperparameters and data. `model_arguments` are converted to keyword arguments. Positional arguments in the model specification are not supported. Thus it is not possible to use Julia's multiple dispatch for the model arguments. -The `@model` macro returns a regular Julia function (in this example `model_name()`) which can be executed as usual. 
The only difference here is that -all arguments of the model function are treated as keyword arguments. Upon calling, the model function returns a so-called model generator object, e.g: +The `@model` macro returns a regular Julia function (in this example `model_name()`) which can be executed as usual. The only difference here is that all arguments of the model function are treated as keyword arguments. Upon calling, the model function returns a so-called model generator object, e.g: ```@example model-specification-model-macro using RxInfer #hide 
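To make the "model generator object" behaviour described in this hunk concrete, here is a sketch under RxInfer v3 semantics (the model and argument names are illustrative, not taken from the patched page):

```julia
using RxInfer

@model function coin_model(y, a, b)
    θ ~ Beta(a, b)
    y ~ Bernoulli(θ)
end

# Calling the model function does not materialize a factor graph yet;
# it returns a lazy model generator object.
generator = coin_model(a = 2.0, b = 7.0)

# The generator can later be conditioned on data, e.g. via the `|` operator,
# before the graph is actually created by `infer`.
conditioned = generator | (y = 1.0, )
```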
From dd59c22d5841f032227c3b825ce6d830c28ffbe6 Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Wed, 10 Apr 2024 16:58:06 +0200 Subject: [PATCH 11/22] Proofread index.md --- docs/src/index.md | 21 +++++++-------------- 1 file changed, 7 insertions(+), 14 deletions(-) diff --git a/docs/src/index.md b/docs/src/index.md index 5be214c1e..562ac1fa9 100644 --- a/docs/src/index.md +++ b/docs/src/index.md @@ -27,29 +27,22 @@ Given a probabilistic model, RxInfer allows for an efficient message-passing bas ## Why RxInfer -Many important AI applications, including audio processing, self-driving vehicles, weather forecasting, and extended-reality video processing require continually solving an inference task in sophisticated probabilistic models with a large number of latent variables. -Often, the inference task in these applications must be performed continually and in real-time in response to new observations. -Popular MC-based inference methods, such as the No U-Turn Sampler (NUTS) or Hamiltonian Monte Carlo (HMC) sampling, rely on computationally heavy sampling procedures that do not scale well to probabilistic models with thousands of latent states. -Therefore, while MC-based inference is an very versatile tool, it is practically not suitable for real-time applications. -While the alternative variational inference method (VI) promises to scale better to large models than sampling-based inference, VI requires the derivation of gradients of a "variational Free Energy" cost function. -For large models, manual derivation of these gradients might not be feasible, while automated "black-box" gradient methods do not scale either because they are not capable of taking advantage of sparsity or conjugate pairs in the model. -Therefore, while Bayesian inference is known as the optimal data processing framework, in practice, real-time AI applications rely on much simpler, often ad hoc, data processing algorithms. 
- -RxInfer aims to remedy these issues by running efficient Bayesian inference in sophisticated probabilistic models, -taking advantage of local conjugate relationships in probabilistic models, and focusing on real-time Bayesian inference in large state-space models with thousands of latent variables. In addition, RxInfer provides a straightforward way to extend its functionality with custom factor nodes and message passing update rules. The engine is capable of running -various Bayesian inference algorithms in different parts of the factor graph of a single probabilistic model. This makes it easier -to explore different "what-if" scenarios and enables very efficient inference in specific cases. +Many important AI applications, including audio processing, self-driving vehicles, weather forecasting, and extended-reality video processing require continually solving an inference task in sophisticated probabilistic models with a large number of latent variables. Often, the inference task in these applications must be performed continually and in real-time in response to new observations. + +Popular MC-based inference methods, such as the No U-Turn Sampler (NUTS) or Hamiltonian Monte Carlo (HMC) sampling, rely on computationally heavy sampling procedures that do not scale well to probabilistic models with thousands of latent states. Therefore, while MC-based inference is an very versatile tool, it is practically not suitable for real-time applications. While the alternative variational inference method (VI) promises to scale better to large models than sampling-based inference, VI requires the derivation of gradients of a "Variational Free Energy" cost function. For large models, manual derivation of these gradients might not be feasible, while automated "black-box" gradient methods do not scale either because they are not capable of taking advantage of sparsity or conjugate pairs in the model. 
Therefore, while Bayesian inference is known as the optimal data processing framework, in practice, real-time AI applications rely on much simpler, often ad hoc, data processing algorithms. + +RxInfer aims to remedy these issues by running efficient Bayesian inference in sophisticated probabilistic models, taking advantage of local conjugate relationships in probabilistic models, and focusing on real-time Bayesian inference in large state-space models with thousands of latent variables. In addition, RxInfer provides a straightforward way to extend its functionality with custom factor nodes and message passing update rules. The engine is capable of running various Bayesian inference algorithms in different parts of the factor graph of a single probabilistic model. This makes it easier to explore different "what-if" scenarios and enables very efficient inference in specific cases. ## Package Features - User friendly syntax for specification of probabilistic models, achieved with [`GraphPPL`](https://github.com/ReactiveBayes/GraphPPL.jl). - Support for hybrid models combining discrete and continuous latent variables. - - Factorisation and functional form constraints specification. + - Factorization and functional form constraints specification. - Graph visualisation and extensions with different custom plugins. - Saving graph on a disk and re-loading it later on. - Automatic generation of message passing algorithms, achieved with [`ReactiveMP`](https://github.com/ReactiveBayes/ReactiveMP.jl). - Support for hybrid distinct message passing inference algorithm under a unified paradigm. - - Evaluation of Bethe free energy as a model performance measure. + - Evaluation of Bethe Free Energy as a model performance measure. - Schedule-free reactive message passing API. - Scalability for large models with millions of parameters and observations. - High performance. 
From 624ab21873e7e8c85a02f57be3d0c575bd34c86e Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Thu, 11 Apr 2024 15:22:52 +0200 Subject: [PATCH 12/22] fix inconsistent docs in functional forms --- docs/src/library/functional-forms.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/library/functional-forms.md b/docs/src/library/functional-forms.md index aa4b8e7fc..f2170ee63 100644 --- a/docs/src/library/functional-forms.md +++ b/docs/src/library/functional-forms.md @@ -100,7 +100,7 @@ form_constraint.fixed_value = Gamma(1.0, 1.0) ## [CompositeFormConstraint](@id lib-forms-composite-constraint) -It is possible to create a composite functional form constraint with the `+` operator, e.g: +It is possible to create a composite functional form by stacking operators, e.g: ```@example constraints-functional-forms @constraints begin From dbffd77bbe2eaaf0ec11625ef01753282e734a87 Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Thu, 11 Apr 2024 15:28:00 +0200 Subject: [PATCH 13/22] fix typo in model construction page --- docs/src/library/model-construction.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/library/model-construction.md b/docs/src/library/model-construction.md index 0776eebfc..5151ae3ee 100644 --- a/docs/src/library/model-construction.md +++ b/docs/src/library/model-construction.md @@ -11,7 +11,7 @@ Also read the [_Model Specification_](@ref user-guide-model-specification) guide ## [`@model` macro](@id lib-model-construction-model-macro) -`RxInfer` operates with so-called [graphical probabilistic models](https://en.wikipedia.org/wiki/Graphical_model), more specifically [factor graphs](https://en.wikipedia.org/wiki/Factor_graph). Working with graphs directly is, however, tedius and error-prone, especially for large models. To simplify the process, `RxInfer` exports the `@model` macro, which translates a textual description of a probabilistic model into a corresponding factor graph representation. 
+`RxInfer` operates with so-called [graphical probabilistic models](https://en.wikipedia.org/wiki/Graphical_model), more specifically [factor graphs](https://en.wikipedia.org/wiki/Factor_graph). Working with graphs directly is, however, tedious and error-prone, especially for large models. To simplify the process, `RxInfer` exports the `@model` macro, which translates a textual description of a probabilistic model into a corresponding factor graph representation. ```@docs RxInfer.@model From 41f11e44ff9535bd9ea960cb6f799abe63efac3d Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Thu, 11 Apr 2024 15:32:10 +0200 Subject: [PATCH 14/22] We do have modularity!!!!! --- docs/src/manuals/comparison.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/manuals/comparison.md b/docs/src/manuals/comparison.md index 6e632bdf4..a45d4f443 100644 --- a/docs/src/manuals/comparison.md +++ b/docs/src/manuals/comparison.md @@ -10,7 +10,7 @@ Nowadays there's plenty of probabilistic programming languages and packages avai | Toolbox | Universality | Efficiency | Expressiveness | Debugging & Visualization | Modularity | Inference Engine | Language | Community & Ecosystem | | -------------------------------------------------------------------- | ------------ | ---------- | -------------- | ------------------------- | ---------- | ---------------- | -------- | --------------------- | -| [**RxInfer.jl**](https://rxinfer.ml/) | ~ | ✓ | ✓ | ~ | ✗ | Message-passing | Julia | ✗ | +| [**RxInfer.jl**](https://rxinfer.ml/) | ~ | ✓ | ✓ | ~ | ✓ | Message-passing | Julia | ✗ | | [**ForneyLab.jl**](https://github.com/biaslab/ForneyLab.jl) | ✗ | ~ | ✗ | ~ | ✗ | Message-passing | Julia | ✗ | | [**Infer.net**](https://dotnet.github.io/infer/) | ~ | ✓ | ✗ | ✓ | ✗ | Message-passing | C# | ✗ | | [**PGMax**](https://github.com/google-deepmind/PGMax) | ✗ | ✓ | ✗ | ✓ | ✗ | Message-passing | Python | ✗ | From 70b17fb8d3733527a1edc308df1598f3e8d41b5c Mon Sep 17 00:00:00 
2001 From: Wouter Nuijten Date: Thu, 11 Apr 2024 15:33:50 +0200 Subject: [PATCH 15/22] fix typo --- docs/src/manuals/comparison.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/src/manuals/comparison.md b/docs/src/manuals/comparison.md index a45d4f443..e12dff7c2 100644 --- a/docs/src/manuals/comparison.md +++ b/docs/src/manuals/comparison.md @@ -28,7 +28,7 @@ Nowadays there's plenty of probabilistic programming languages and packages avai **Notes**: - **Universality**: Denotes the capability to depict a vast array of probabilistic models. -- **Efficiency**: Highlights computational competence. A "–" in this context suggests perceived slowness. +- **Efficiency**: Highlights computational competence. A "~" in this context suggests perceived slowness. - **Expressiveness**: Assesses the ability to concisely formulate intricate probabilistic models. - **Debugging & Visualization**: Evaluates the suite of tools for model debugging and visualization. - **Modularity**: Reflects the potential to create models by integrating smaller models. From ca209688ee9e37026f169f0ccc712ad71c7624af Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Thu, 11 Apr 2024 15:34:53 +0200 Subject: [PATCH 16/22] merge lines --- docs/src/manuals/model-specification.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/docs/src/manuals/model-specification.md b/docs/src/manuals/model-specification.md index 909e2cc2d..b266fbd7d 100644 --- a/docs/src/manuals/model-specification.md +++ b/docs/src/manuals/model-specification.md @@ -18,8 +18,7 @@ where `model_arguments...` may include both hypeparameters and data. `model_arguments` are converted to keyword arguments. Positional arguments in the model specification are not supported. Thus it is not possible to use Julia's multiple dispatch for the model arguments. -The `@model` macro returns a regular Julia function (in this example `model_name()`) which can be executed as usual. 
The only difference here is that -all arguments of the model function are treated as keyword arguments. Upon calling, the model function returns a so-called model generator object, e.g: +The `@model` macro returns a regular Julia function (in this example `model_name()`) which can be executed as usual. The only difference here is that all arguments of the model function are treated as keyword arguments. Upon calling, the model function returns a so-called model generator object, e.g: ```@example model-specification-model-macro using RxInfer #hide From aef46606ade772f7f55e3e56f679e9ee61688be3 Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Thu, 11 Apr 2024 15:39:34 +0200 Subject: [PATCH 17/22] merge lines --- docs/src/manuals/model-specification.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/docs/src/manuals/model-specification.md b/docs/src/manuals/model-specification.md index b266fbd7d..497588019 100644 --- a/docs/src/manuals/model-specification.md +++ b/docs/src/manuals/model-specification.md @@ -33,8 +33,7 @@ nothing #hide ``` The model generator is not a real model (yet). For example, in the code above, we haven't specified anything for the `observation`. -The generator object allows us to iteratively add extra properties to the model, condition on data, and/or assign extra metadata information -without actually materializing the entire graph structure. Read extra information about model generator [here](@ref lib-model-construction). +The generator object allows us to iteratively add extra properties to the model, condition on data, and/or assign extra metadata information without actually materializing the entire graph structure. Read extra information about model generator [here](@ref lib-model-construction). 
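The model-generator workflow just described can be sketched end-to-end. This is an editor's illustration, not part of the patched pages: the `coin_model` name and the Beta/Bernoulli nodes are assumptions here, chosen to follow the Beta-Bernoulli examples used elsewhere in these docs.

```julia
using RxInfer

# All model arguments become keyword arguments; positional calls are not supported
@model function coin_model(y, a, b)
    θ ~ Beta(a, b)      # prior over the coin bias
    y ~ Bernoulli(θ)    # a single observation
end

# Calling the model function returns a lazy model generator;
# no factor graph has been materialized at this point
generator = coin_model(a = 2.0, b = 7.0)

# The graph is built only once the generator is conditioned on data,
# for example inside `infer`
result = infer(model = coin_model(a = 2.0, b = 7.0), data = (y = 1.0,))
```

The intermediate `generator` can also be conditioned on data directly, as described on the model-construction page linked above, before being materialized.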
## A state space model example From feddaed92873dbbd55694d2dc98a0153b72c489c Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Thu, 11 Apr 2024 15:45:55 +0200 Subject: [PATCH 18/22] Add docstring for init macro --- src/model/plugins/initialization_plugin.jl | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/src/model/plugins/initialization_plugin.jl b/src/model/plugins/initialization_plugin.jl index 2473b2f61..5c4f9fc59 100644 --- a/src/model/plugins/initialization_plugin.jl +++ b/src/model/plugins/initialization_plugin.jl @@ -295,6 +295,12 @@ function init_macro_interior(init_body::Expr) return init_body end +""" + @initialization + +Macro for specifying the initialization state of a model. Accepts either a function or a block of code. +Allows the specification of initial messages and marginals that can be applied to a model in the `infer` function. +""" macro initialization(init_body) return esc(RxInfer.init_macro_interior(init_body)) end From 3d2795fade1950c1191efb1f4b2e678c52738b9f Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Thu, 11 Apr 2024 15:46:01 +0200 Subject: [PATCH 19/22] Make docs run --- docs/src/manuals/migration-guide-v2-v3.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/src/manuals/migration-guide-v2-v3.md b/docs/src/manuals/migration-guide-v2-v3.md index 2e6452438..a628a7b8f 100644 --- a/docs/src/manuals/migration-guide-v2-v3.md +++ b/docs/src/manuals/migration-guide-v2-v3.md @@ -25,6 +25,8 @@ infer(model = coin_toss(prior=Beta(1, 1)), Initialization of messages and marginals to kickstart the inference procedure was previously done with the `initmessages` and `initmarginals` keyword. With the introduction of a nested model specificiation in the `@model` macro, we now need a more specific way to initialize messages and marginals. This is done with the new `@initialization` macro. The syntax for the `@initialization` macro is similar to the `@constraints` and `@meta` macro. 
An example is shown below: ```@example migration-guide +@model function submodel() end #hide + @initialization begin # Initialize the marginal for the variable x q(x) = vague(NormalMeanVariance) From 227da3f7f3a82632063c7f59ae1862eb1f4c9214 Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Thu, 11 Apr 2024 16:00:14 +0200 Subject: [PATCH 20/22] add nested model meta section --- docs/src/manuals/meta-specification.md | 14 +++++++++++++- 1 file changed, 13 insertions(+), 1 deletion(-) diff --git a/docs/src/manuals/meta-specification.md b/docs/src/manuals/meta-specification.md index 24df4767b..6b7dee1c5 100644 --- a/docs/src/manuals/meta-specification.md +++ b/docs/src/manuals/meta-specification.md @@ -170,4 +170,16 @@ println("Estimated mean for latent state `x` is ", mean(inference_result.posteri !!! warning The above example is not mathematically correct. It is only used to show how we can work with `@meta` as well as how to create a meta structure for a node in `RxInfer.jl`. -Read more about the `@meta` macro in the [official documentation](https://reactivebayes.github.io/GraphPPL.jl/stable/) of GraphPPL \ No newline at end of file +Read more about the `@meta` macro in the [official documentation](https://reactivebayes.github.io/GraphPPL.jl/stable/) of GraphPPL + +## Adding metadata to nodes in submodels + +Similarly to the `@constraints` macro, the `@meta` macro exposes syntax to push metadata to nodes in submodels. The `for meta in submodel` syntax applies the specified metadata to nodes inside a given submodel.
For example, if we use the `gaussian_model_with_meta` model in a larger model, we can write: + +```@example custom-meta +custom_meta = @meta begin + for meta in gaussian_model_with_meta + NormalMeanVariance(y) -> MetaConstrainedMeanNormal(-2, 2) + end +end +``` \ No newline at end of file From 9ff939cf9e50cc0a825b10ff79f28e1151b88bbc Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Thu, 11 Apr 2024 16:20:14 +0200 Subject: [PATCH 21/22] Change streamlined to streaming --- docs/src/manuals/inference/streamlined.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/src/manuals/inference/streamlined.md b/docs/src/manuals/inference/streamlined.md index 84373c42b..a6ac460fc 100644 --- a/docs/src/manuals/inference/streamlined.md +++ b/docs/src/manuals/inference/streamlined.md @@ -1,4 +1,4 @@ -# [Streamlined (online) inference](@id manual-online-inference) +# [Streaming (online) inference](@id manual-online-inference) This guide explains how to use the [`infer`](@ref) function for dynamic datasets. We'll show how `RxInfer` can continuously update beliefs asynchronously whenever a new observation arrives. We'll use a simple Beta-Bernoulli model as an example, which has been covered in the [Getting Started](@ref user-guide-getting-started) section, but keep in mind that these techniques can apply to any model. @@ -444,7 +444,7 @@ Nice, the history of the estimated posteriors aligns well with the real (hidden) ## [Callbacks](@id manual-online-inference-callbacks) The [`RxInferenceEngine`](@ref) has its own lifecycle. The callbacks differ a little bit from [Using callbacks with Static Inference](@ref manual-static-inference-callbacks).
-Here are available callbacks that can be used together with the streamlined inference: +Here are available callbacks that can be used together with the streaming inference: ```@eval using RxInfer, Test, Markdown # Update the documentation below if this test does not pass @@ -522,7 +522,7 @@ nothing #hide ## [Event loop](@id manual-online-inference-event-loop) -In constrast to [`Static Inference`](@ref manual-static-inference), the streamlined version of the [`infer`](@ref) function +In contrast to [`Static Inference`](@ref manual-static-inference), the streaming version of the [`infer`](@ref) function does not provide callbacks such as `on_marginal_update`, since it is possible to subscribe directly on those updates with the `engine.posteriors` field. However, the reactive inference engine provides an ability to listen to its internal event loop, that also includes "pre" and "post" events for posterior updates. @@ -863,9 +863,9 @@ nothing #hide The `:before_stop` and `:after_stop` events are not emmited in case of the datastream completion. Use the `:on_complete` instead. -## [Using `data` keyword argument with the streamlined inference](@id manual-online-inference-data) +## [Using `data` keyword argument with streaming inference](@id manual-online-inference-data) -The streamlined version does support static datasets as well. +The streaming version does support static datasets as well. Internally, it converts it to a datastream, that emits all observations in a sequntial order without any delay.
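The `engine.posteriors` subscription mentioned in the event-loop hunk above can be made concrete with a short sketch. This is an editor's illustration under stated assumptions: the `beta_bernoulli_stream` model, the `@autoupdates` body, and the datastream built with `from` follow the Beta-Bernoulli streaming pattern used in this guide and are not taken verbatim from the patched manual.

```julia
using RxInfer

@model function beta_bernoulli_stream(y, a, b)
    θ ~ Beta(a, b)
    y ~ Bernoulli(θ)
end

# Feed the latest posterior of θ back into the prior parameters
autoupdates = @autoupdates begin
    a, b = params(q(θ))
end

# Kickstart the loop with a vague initial marginal
init = @initialization begin
    q(θ) = Beta(1, 1)
end

engine = infer(
    model          = beta_bernoulli_stream(),
    datastream     = from([(y = 1.0,), (y = 0.0,), (y = 1.0,)]),
    autoupdates    = autoupdates,
    initialization = init,
    autostart      = false
)

# Subscribe directly to posterior updates, then start the engine
subscription = subscribe!(engine.posteriors[:θ], posterior -> println("New posterior: ", posterior))
RxInfer.start(engine)
```

Because the subscription is attached before `RxInfer.start`, no posterior update is missed; this is the direct-subscription route the text contrasts with an `on_marginal_update` callback.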
As an example: ```@example manual-online-inference From d02d7a230d569457ed090026c9ee54b330d450eb Mon Sep 17 00:00:00 2001 From: Wouter Nuijten Date: Thu, 11 Apr 2024 16:24:03 +0200 Subject: [PATCH 22/22] Merge lines --- docs/src/manuals/customization/postprocess.md | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/docs/src/manuals/customization/postprocess.md b/docs/src/manuals/customization/postprocess.md index 3b71afac6..fe620ffb1 100644 --- a/docs/src/manuals/customization/postprocess.md +++ b/docs/src/manuals/customization/postprocess.md @@ -1,10 +1,7 @@ # [Inference results postprocessing](@id user-guide-inference-postprocess) [`infer`](@ref) allow users to postprocess the inference result with the `postprocess = ...` keyword argument. The inference engine -operates on __wrapper__ types to distinguish between marginals and messages. By default -these wrapper types are removed from the inference results if no addons option is present. -Together with the enabled addons, however, the wrapper types are preserved in the -inference result output value. Use the options below to change this behaviour: +operates on __wrapper__ types to distinguish between marginals and messages. By default these wrapper types are removed from the inference results if no addons option is present. Together with the enabled addons, however, the wrapper types are preserved in the inference result output value. Use the options below to change this behaviour: ```@docs inference_postprocess