
inverse problem example #27

Open
dialuser opened this issue Apr 11, 2023 · 8 comments
@dialuser

Hi

Thanks for releasing the nice package. I'm interested in solving an inverse problem using DeepONet. I wonder if you could provide a minimal example, for instance based on the ODE problem that you already solved.

A.

@TomF98
Contributor

TomF98 commented Apr 12, 2023

Hey,

I uploaded an example of inverse operator learning via DeepONet to my local fork: ODE example notebook.

This example shows how to learn the inverse operator in a purely data-driven way.
It would also be possible to include a physics loss in the training of the DeepONet, but this is slightly more involved and currently only implemented locally, not publicly available. We aim to add this in the next few days/weeks if you are interested in this aspect.

@dialuser
Author

Hi @TomF98 ,

Thank you for uploading the example. Could you kindly explain what the following line in your code does:
fix_branch_input(u)

My understanding is that u is the unknown variable to be inferred. So, in pseudo-torch, is this equivalent to

u = torch.FloatTensor(...)
u.requires_grad = True

trained_deeponet_model.eval()

and then backpropagating a loss against u?

Thanks,
A

@TomF98
Contributor

TomF98 commented Apr 13, 2023

Hey,

in the inverse problem, we are generally given some solution data and want to determine some data functions or values such that a given differential equation is fulfilled. E.g. in the ODE example, we would have the solution $u$ evaluated at some points and want to determine a right-hand side $f$ such that $\partial_t u(t) = f(t)$ for all $t \in [0, 1]$.
So $u$ is known (generally only on a discrete point set) and we want to find the unknown $f$.

Therefore, we now use the solution $u$ (evaluated at some discrete points) as the input for our branch net in the DeepONet. Let's say we have data of the form $\{(t_i, u_j, f_j(t_i))\}$, where the $t_i$ are some points in time and $u_j, f_j$ are a combination of solution and rhs of the ODE. The DeepONet should fulfill:
$\text{DeepONet applied to } (t_i, u_j) = f_j(t_i)$. This equality is trained with a standard fitting procedure.
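As a small numerical illustration of this setup (plain NumPy, independent of any DeepONet): take $u(t) = \sin(t)$ on $[0, 1]$, whose exact right-hand side is $f(t) = \cos(t)$, and recover $f$ from discrete samples of $u$ by finite differences.

```python
import numpy as np

# Given samples of u on [0, 1], recover f with d/dt u = f via finite
# differences. Here u(t) = sin(t), so the exact rhs is f(t) = cos(t).
t = np.linspace(0.0, 1.0, 201)
u = np.sin(t)                      # "measured" solution on a grid

f_approx = np.gradient(u, t)       # central differences in the interior
f_exact = np.cos(t)
max_err = np.max(np.abs(f_approx - f_exact))  # small discretization error
```

Of course, such a direct differentiation amplifies noise in real measurements, which is one motivation for learning the operator instead.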
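The fitting procedure above can be sketched in plain PyTorch. This is only an illustrative toy (class and variable names are mine, not the TorchPhysics API): the branch net sees $u_j$ sampled at m sensor points, the trunk net sees $t_i$, and their feature vectors are combined by a dot product.

```python
import torch

# Minimal data-driven DeepONet-style sketch (illustrative, not the
# TorchPhysics API): output = <branch(u), trunk(t)>.
class TinyDeepONet(torch.nn.Module):
    def __init__(self, m, p=16):
        super().__init__()
        self.branch = torch.nn.Sequential(
            torch.nn.Linear(m, 32), torch.nn.Tanh(), torch.nn.Linear(32, p))
        self.trunk = torch.nn.Sequential(
            torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, p))

    def forward(self, t, u):
        return (self.branch(u) * self.trunk(t)).sum(-1, keepdim=True)

torch.manual_seed(0)
m = 20
sensors = torch.linspace(0, 1, m)
# synthetic data: u_j(t) = a_j * t^2 / 2  =>  rhs f_j(t) = a_j * t
a = torch.rand(100, 1) * 2
u_data = a * sensors**2 / 2          # (100, m) branch inputs u_j
t = torch.rand(100, 1)               # evaluation times t_i
f_target = a * t                     # target values f_j(t_i)

model = TinyDeepONet(m)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):                 # standard fitting procedure
    opt.zero_grad()
    loss = torch.mean((model(t, u_data) - f_target) ** 2)
    loss.backward()
    opt.step()
```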

Once the training is finished, we want to evaluate our DeepONet, and only here is the method .fix_branch_input(u) used.
This method is just meant as a more efficient way of evaluating the model at the end: it saves the branch output for a given $u$. For example, evaluating the DeepONet twice with the same $u$ could be done with:

model(t_0, u) # first evaluation returns f(t_0)
model(t_1, u) # second evaluation returns f(t_1)

But this would internally evaluate the branch net twice, once in each call, even though the branch output stays the same because $u$ is the same. Therefore one can call .fix_branch_input(u) to evaluate the branch only once and then plug just the time points into the model:

model.fix_branch_input(u) # evaluate branch net once and save its output
model(t_0) # only evaluates the trunk net, reuses the saved branch output, returns f(t_0)
model(t_1) # same, returns f(t_1)

This is also explained in the docs.
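The caching idea can be made concrete with a toy class (a schematic of how such caching could work, not TorchPhysics' actual implementation):

```python
import torch

# Schematic of the caching behaviour described above: fix_branch_input
# evaluates the branch net once and stores the result, so later calls
# only run the trunk net.
class CachedDeepONet(torch.nn.Module):
    def __init__(self, m, p=8):
        super().__init__()
        self.branch = torch.nn.Linear(m, p)
        self.trunk = torch.nn.Linear(1, p)
        self._branch_out = None

    def fix_branch_input(self, u):
        with torch.no_grad():
            self._branch_out = self.branch(u)  # evaluate branch once

    def forward(self, t):
        # reuse the stored branch output; only the trunk net runs here
        return (self._branch_out * self.trunk(t)).sum(-1, keepdim=True)

torch.manual_seed(0)
model = CachedDeepONet(m=5)
u = torch.rand(5)
model.fix_branch_input(u)          # branch evaluated and cached
y0 = model(torch.tensor([[0.0]]))  # trunk-only evaluation at t_0
y1 = model(torch.tensor([[1.0]]))  # trunk-only evaluation at t_1
```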

Hope that helps you.

@dialuser
Author

Hi @TomF98

Thank you for the explanation. That helps.

Alex

@dialuser dialuser reopened this May 3, 2023
@dialuser
Author

dialuser commented May 3, 2023

Hi @TomF98,

If you recall, I tried to perform an inverse problem. My problem has 10 parameters, whereas in your example it's a scalar parameter. I tried to use an R^n space to define the parameter, but there doesn't seem to be a corresponding domain class for R^n(?). You only have 1D, 2D, and 3D. Looking forward to hearing from you.

Thanks,
A

@TomF98
Contributor

TomF98 commented May 4, 2023

Hi @dialuser,
Should your output or input be of dimension 10? I think in the case of an inverse problem you don't need to define a domain, since all your data points are given through measurements (externally)?

But in case you still need to define a domain of dimension 10 or higher, this is possible by combining domains of smaller dimension. In TorchPhysics, domains can be connected via the Cartesian product to obtain higher-dimensional objects. For example:

I1 = tp.domains.Interval(X, 0, 1) # Interval in space X
I2 = tp.domains.Interval(Y, 0, 1) # Interval in space Y
S = I1 * I2 # Square in space X*Y, now of dimension 2

I3 = tp.domains.Interval(Z, 0, 1) # another interval
C = S * I3 # Cube in space X*Y*Z, dimension = 3

I4 = tp.domains.Interval(W, 0, 1)
H = C * I4 # Cube in dimension 4
# and so on....

The above works for all implemented domains (circle, triangle, ...) and should enable you to construct domains of arbitrary dimension.
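The product construction can be mirrored with a few lines of plain Python/torch (toy classes of my own, not the TorchPhysics implementation): each domain samples points, and `*` concatenates coordinates to form a higher-dimensional domain.

```python
import torch

# Toy sketch of Cartesian products of domains (illustrative only).
class Interval:
    def __init__(self, low, high):
        self.low, self.high = low, high
        self.dim = 1

    def sample(self, n):
        return self.low + (self.high - self.low) * torch.rand(n, 1)

    def __mul__(self, other):
        return Product(self, other)

class Product:
    def __init__(self, a, b):
        self.a, self.b = a, b
        self.dim = a.dim + b.dim  # dimensions add under the product

    def sample(self, n):
        # sample each factor independently, concatenate the coordinates
        return torch.cat([self.a.sample(n), self.b.sample(n)], dim=-1)

    def __mul__(self, other):
        return Product(self, other)

box10 = Interval(0, 1)
for _ in range(9):
    box10 = box10 * Interval(0, 1)  # build [0, 1]^10 step by step
points = box10.sample(4)            # 4 points of dimension 10
```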

@dialuser
Author

dialuser commented May 5, 2023

Hi @TomF98 ,

Thanks for your clarifications. To your question "Should your output or input be of dimension 10?": the input is of dimension 10 and the dimension of the output is basically infinite. I'm trying to first learn a mapping from input to output and then find the solution. This is somewhat different from the initial advice you gave (i.e., backpropagate the loss function to find the parameters directly). I'm curious which one gives a "better" solution.
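The direct-backpropagation approach mentioned here can be sketched as follows. Everything in this snippet is illustrative: a tiny frozen linear layer stands in for a trained surrogate/DeepONet, and the 10-dimensional u stands in for the 10 unknown parameters.

```python
import torch

torch.manual_seed(0)
# stand-in for a trained surrogate mapping 10 parameters -> 1 output
model = torch.nn.Linear(10, 1)
with torch.no_grad():
    model.weight.fill_(0.3)   # pretend these are trained weights
    model.bias.zero_()
for p in model.parameters():
    p.requires_grad_(False)   # freeze the trained model
model.eval()

u = torch.zeros(10, requires_grad=True)  # the 10 unknown parameters
target = torch.tensor([1.5])             # measured data
opt = torch.optim.Adam([u], lr=0.1)

for _ in range(300):
    opt.zero_grad()
    loss = torch.mean((model(u) - target) ** 2)
    loss.backward()           # gradient flows into u only
    opt.step()
```

The surrogate-then-optimize route does the same minimization, just with the learned forward map in place of the frozen model; which gives the "better" solution usually depends on how accurate the surrogate is near the true parameters.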

@Kangyukuan

The line `from fdm_heat_equation import FDM, transform_to_points` raises the following error: ModuleNotFoundError: No module named 'fdm_heat_equation'.
