Commit: Time-Marching English suggestions
AnnePicus authored and TaliCohn committed Feb 19, 2025
1 parent 1d30f09 commit 8168905
Showing 1 changed file with 58 additions and 60 deletions.
118 changes: 58 additions & 60 deletions algorithms/differential_equations/time_marching/time_marching.ipynb
"id": "0ab83a69-09e2-4df1-a7ee-3a5b45e590cf",
"metadata": {},
"source": [
"# Time Marching Based Quantum Solvers for Time-dependent Linear Differential Equations"
]
},
{
"cell_type": "markdown",
"id": "b94080f2-e474-462d-8b54-b107026f4bca",
"metadata": {},
"source": [
"## Introduction\n",
"This demonstration is based on the [[1](#TimeMarching)] paper. The notebook was written in collaboration with Prof. Di Fang, the first author of the paper.\n",
"\n",
"Time marching is a method for solving differential equations in time by integrating the solution vector through time in small discrete steps, where each timestep depends on previous timesteps. This paper applies an evolution matrix sequentially on the state and makes it evolve through time, as done in time-dependent Hamiltonian simulations. \n",
"\n",
"## Defining the Problem\n",
"\n",
"* **Input:** a system of homogeneous linear ordinary differential equations (ODEs):\n",
"$$\frac{d}{dt} |\psi(t)\rangle = A(t) |\psi(t)\rangle, \quad |\psi(0)\rangle = |\psi_0\rangle$$ Note that $A$ can vary in time. We assume that the matrix $A$ has bounded variation. The input model of $A(t)$ is a series of time-dependent block-encodings, described next.\n",
"\n",
"* **Output:** a state that is proportional to the solution at time $T$, $|\\psi(T)\\rangle$.\n",
"\n",
"## Describing the Algorithm\n",
"\n",
"The algorithm divides the timeline into long timesteps and short timesteps. In each long timestep, the evolution over its short timesteps is approximated, for example by a truncated Dyson series [[2](#Dyson)] or a Magnus series [[3](#Magnus)]. These approximations are applied as block-encodings on the state, where the following matrix is block-encoded in each long timestep:\n",
"$$\n",
"\\mathcal{\\Xi_l} = \\mathcal{T} e^{\\int_{t_{l-1}}^{t_l} A(t) \\, dt}\n",
"$$"
"id": "d9a996b5-02b1-49e4-88ca-6ea425caf354",
"metadata": {},
"source": [
"The problem is that when this block-encoding has some prefactor $s$ (for example, because an LCU is used to block-encode the integration), the prefactor of the entire simulation is amplified by $s$ at each iteration. This means that the probability of sampling the wanted block decreases exponentially with the number of long timesteps.\n",
"\n",
"This is the main pain point that the algorithm in the paper resolves. In the case of Hamiltonian simulation, it is possible to wrap each timestep with oblivious amplitude amplification [[4](#OAA)] (see [oblivious amplitude amplification](https://github.com/Classiq/classiq-library/blob/main/algorithms/oblivious_amplitude_amplification/oblivious_amplitude_amplification.ipynb)) and get rid of the prefactor. However, this is only possible for a unitary block-encoding. The authors address the issue by instead using uniform singular value amplification [[5](#USVA)], implemented within the QSVT framework."
]
},
{
"cell_type": "markdown",
"id": "89ccefce-4ec8-43eb-ba74-0c16291f612f",
"metadata": {},
"source": [
"## Implementing the Algorithm Using Classiq"
]
},
{
"id": "74f3ef52-7d3e-4808-b69c-a94e613790b8",
"metadata": {},
"source": [
"We choose an easy artificial example to demonstrate the algorithm. For simplicity, we choose $A$, which is easy to block-encode. The following matrix can be easily block-encoded using linear Pauli rotations:\n",
"$$\n",
"A_{ij}(t) = \\cos(i+t)\\delta_{ij}\n",
"$$\n",
"\n",
"The matrix is Hermitian and diagonal, and it helps us in several aspects:\n",
"1. The first-order Magnus expansion will be exact.\n",
"2. The QSVT and QET (quantum eigenvalue transform) will coincide, and we use it to exponentiate the block-encoding.\n",
"\n",
"We simulate a 4x4 matrix using four timesteps, from $t=0$ to $t=2$:"
]
},
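Before building the quantum circuit, it helps to have a classical reference for this example. The sketch below (plain NumPy/SciPy, not part of the notebook's Classiq code; the uniform initial state is our illustrative choice) integrates the ODE numerically and checks it against the closed-form solution that exists because this $A(t)$ is diagonal:

```python
import numpy as np
from scipy.integrate import solve_ivp

N, T = 4, 2.0  # 4x4 system, evolve from t=0 to t=2

def A(t: float) -> np.ndarray:
    # A_ij(t) = cos(i + t) * delta_ij
    return np.diag(np.cos(np.arange(N) + t))

psi0 = np.ones(N) / np.sqrt(N)  # illustrative initial state

# Numerical reference for d|psi>/dt = A(t)|psi>
sol = solve_ivp(lambda t, y: A(t) @ y, (0.0, T), psi0, rtol=1e-10, atol=1e-12)
psi_numeric = sol.y[:, -1]

# A(t) is diagonal, so it commutes with itself at different times and the
# time-ordered exponential reduces to exp of the elementwise integral:
#   psi_i(T) = exp(sin(i + T) - sin(i)) * psi_i(0)
i = np.arange(N)
psi_exact = np.exp(np.sin(i + T) - np.sin(i)) * psi0
```

The closed-form vector is what the marched quantum state should be proportional to at the end.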
{
"id": "73eb6ee5-63e7-41bf-86ca-0df53928a4f2",
"metadata": {},
"source": [
"### Time-Dependent Block-Encoding\n",
"\n",
"The time-dependent block-encoding of $A(t)$ is\n",
"$$\n",
"\\left( I_{n_q} \\otimes \\langle 0_m | \\otimes I_n \\right) \n",
"U_{A(t)} \n",
"\\left( I_{n_q} \\otimes | 0_m \\rangle \\otimes I_n \\right) \n",
"= \\sum_{i=0}^{2^{n_q}-1} | i \\rangle \\langle i | \\frac{A\\left((b-a)\\frac{i}{{2^{n_q}}}+a\\right)}{\\alpha}\n",
"$$\n",
"\n",
"For a given timeslice, we get this:\n",
"$$\n",
"A_{ij}(t, a, b) = \\cos((b-a)\\frac{t}{2^{n_q}} + a + i)\\delta_{ij}\n",
"$$\n",
"\n",
"We accomplish this easily with a sequence of two Pauli rotations:"
]
},
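As a sanity check on the formula above, the discretized diagonal can be tabulated classically. This NumPy sketch (the register sizes and the interval are illustrative, not taken from the notebook's code) confirms every entry lies in $[-1, 1]$, so the cosine diagonal needs no subnormalization beyond $\alpha = 1$:

```python
import numpy as np

n_q, n = 2, 2            # 2**n_q time points, 2**n system dimension
a, b = 0.0, 0.5          # one long timestep [a, b)

t = np.arange(2**n_q)[:, None]
i = np.arange(2**n)[None, :]
# A_ij(t, a, b) = cos((b - a) * t / 2**n_q + a + i) * delta_ij,
# tabulated over all (t, i) pairs selected by the time register
D = np.cos((b - a) * t / 2**n_q + a + i)

# Cosine values are bounded by 1, which is what lets a pair of linear
# Pauli rotations realize this diagonal without rescaling.
```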
{
"id": "c375c120-3fc7-406a-a4ef-c12091675e7a",
"metadata": {},
"source": [
"### Short Time Evolution\n",
"\n",
"We use a first-order Magnus expansion, which is exact in this case:\n",
"$$\n",
"\\overline{\\Xi} = e^{\\frac{b-a}{M}} \\sum_{k=0}^{M-1} A\\left(a + k \\frac{b-a}{M}\\right)\n",
"$$\n",
"It is built in two steps.\n",
"#### 1. Riemann Summation of Short Timesteps"
]
},
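The quality of this step can be checked classically. In the sketch below (sizes illustrative, same diagonal example), the Magnus truncation itself is exact because $A(t)$ commutes with itself at different times, so the only error is the Riemann discretization of the integral, which shrinks as $M$ grows:

```python
import numpy as np
from scipy.linalg import expm

N, a, b = 4, 0.0, 0.5
idx = np.arange(N)

def A(t: float) -> np.ndarray:
    return np.diag(np.cos(idx + t))

def magnus_step(M: int) -> np.ndarray:
    # exp((b-a)/M * sum_k A(a + k*(b-a)/M)): first-order Magnus with an
    # M-point left Riemann sum approximating the integral of A(t)
    S = sum(A(a + k * (b - a) / M) for k in range(M)) * (b - a) / M
    return expm(S)

# Exact propagator for this diagonal A: elementwise exp of the exact integral
exact = np.diag(np.exp(np.sin(idx + b) - np.sin(idx + a)))

errors = [np.linalg.norm(magnus_step(M) - exact) for M in (4, 16, 64)]
```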
{
"cell_type": "markdown",
"id": "0e31d376-692b-4b89-8708-e9743dc08ddd",
"metadata": {},
"source": [
"By wrapping the time variable with the Hadamard transform, we get an exact block-encoding of the Riemann sum of the input block-encoding."
]
},
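This averaging identity can be verified with a small linear-algebra sketch (NumPy, not the notebook's circuit code): build the time-controlled "select" operator and project the time register onto the uniform superposition; the surviving block is exactly the average of the $A(t_k)$ blocks, i.e. the Riemann sum up to the $dt$ prefactor:

```python
import numpy as np

n_q, N = 2, 4
T_pts = 2**n_q

# A few diagonal blocks standing in for the encoded A(t_k) (illustrative values)
blocks = [np.diag(np.cos(np.arange(N) + k)) for k in range(T_pts)]

# Select operator: sum_k |k><k| (x) A_k
select = np.zeros((T_pts * N, T_pts * N))
for k, Ak in enumerate(blocks):
    select[k * N:(k + 1) * N, k * N:(k + 1) * N] = Ak

# The Hadamard transform maps |0...0> of the time register to the uniform
# state, so projecting onto it averages the blocks.
plus = np.ones(T_pts) / np.sqrt(T_pts)
P = np.kron(plus.reshape(1, -1), np.eye(N))   # (<+|^{(x)n_q} (x) I_N)

riemann_block = P @ select @ P.T
```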
{
"id": "d225e83e-948c-41e2-ba7c-d5638779a640",
"metadata": {},
"source": [
"#### 2. Block-Encoding of the Summation Exponential"
]
},
{
"cell_type": "markdown",
"id": "f8698cc1-945c-4d12-9219-061c39195117",
"metadata": {},
"source": [
"We want to find polynomials for $\\cosh(ax)$ and $\\sinh(ax)$, to combine them into $e^{ax}$.\n",
"\n",
"For pedagogical reasons, we take a naive approach and create separate polynomial approximations for the even part $P_{cosh} \approx \frac{\cosh(ax)}{e^a}$ and the odd part $P_{sinh} \approx \frac{\sinh(ax)}{e^a}$.\n",
"\n",
"Combining them with LCU gives\n",
"$$P(x) \\approx \\frac{e^{ax}}{2e^a},$$ which is a polynomial bounded by $\\frac{1}{2}$.\n",
"\n",
"We could choose $P_{cosh} \\approx \\frac{\\cosh(ax)}{\\cosh(a)}$ and $P_{sinh} \\approx \\frac{\\sinh(ax)}{\\sinh{a}}$.\n",
"Then, LCU with coefficients $[\\frac{\\cosh(a)}{\\cosh(a)+\\sinh(a)}, \\frac{\\sinh(a)}{\\cosh(a)+\\sinh(a)}]$ gives us this:\n",
"$$P(x) \\approx \\frac{e^{ax}}{e^a},$$\n",
"which is the best we can get, and does not require amplification. We nevertheless go with the first approach to demonstrate singular value amplification: removing the redundant factor of 2 saves a multiplicative factor of $O(2^T)$ in the success probability."
]
},
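A quick classical check of this decomposition (using `numpy.polynomial.chebyshev` with an illustrative degree and scale, rather than the notebook's own fitting code): interpolate the even and odd parts separately and confirm that their equal-weight LCU reproduces $e^{ax}/(2e^a)$ and stays bounded by $\frac{1}{2}$:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

a = 1.0      # exponent scale in e^{ax} (illustrative)
deg = 20

# Even part ~ cosh(ax)/e^a and odd part ~ sinh(ax)/e^a
p_cosh = C.chebinterpolate(lambda x: np.cosh(a * x) / np.exp(a), deg)
p_sinh = C.chebinterpolate(lambda x: np.sinh(a * x) / np.exp(a), deg)

x = np.linspace(-1.0, 1.0, 201)
# Equal-weight LCU of the two polynomials halves their sum
lcu = 0.5 * (C.chebval(x, p_cosh) + C.chebval(x, p_sinh))
target = np.exp(a * x) / (2.0 * np.exp(a))
```

The factor $\frac{1}{2}$ showing up in the bound is exactly the redundant prefactor that singular value amplification later removes.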
{
"id": "9391aa9e-1db9-45f1-ac34-8ea17ef092a9",
"metadata": {},
"source": [
"We transform the polynomials to QSVT phases using the `pyqsp` package:"
]
},
{
"id": "9aa6ef81-48f0-4357-988d-0cb74a7d84fd",
"metadata": {},
"source": [
"Lastly, we use the phases in the `qsvt_lcu` function, which is optimized for implementing a linear combination of two QSVT sequences:"
]
},
{
"id": "04925111-04af-401b-8d10-b48b457442cc",
"metadata": {},
"source": [
"### Amplification of a Single Long Timestep"
]
},
{
"cell_type": "markdown",
"id": "f9c344fa-d1bf-4a1f-a7e6-466e1184822b",
"metadata": {},
"source": [
"At the climax of the algorithm, we wrap the Magnus evolution in an amplification step. The prefactor of the exponential block-encoding is 2, so we want to approximate the function $f(x)=2x$ in the interval $[0, \\frac{1}{2}]$."
]
},
{
"cell_type": "markdown",
"id": "d2134523-b7be-4e06-9ad8-3099a19b8bdd",
"metadata": {},
"source": [
"#### Singular Value Amplification ($\\gamma x$)\n",
"\n",
"We follow the paper's approach and perform a min-max optimization, completing the target to an odd function. To approximate an odd target function using an odd polynomial of degree $ d $, we express the target function as\n",
"$$\n",
"F(x) = \\sum_{k=0}^{(d-1)/2} c_k T_{2k+1}(x),\n",
"$$\n",
"where $ T_{2k+1}(x) $ are Chebyshev polynomials of odd degrees and $ c_k $ are unknown coefficients.\n",
"\n",
"To formulate this as a discrete optimization problem, we discretize $[-1, 1]$ using $ M $ grid points:\n",
"$$\n",
"x_j = -\\cos\\left(\\frac{j \\pi}{M-1}\\right), \\quad j = 0, \\ldots, M-1.\n",
"$$\n",
"\n",
"We define the coefficient matrix:\n",
"$$\n",
"A_{jk} = T_{2k+1}(x_j), \\quad k = 0, \\ldots, \\frac{d-1}{2}.\n",
"$$\n",
"\n",
"We find the coefficients by solving this convex optimization problem:\n",
"$$\n",
"\\min_{\\{c_k\\}} \\left( \\max_{x_j \\in [0, 1]} \\left| F(x_j) - (1 - \\epsilon)x_j \\right| \\right),\n",
"$$\n",
"subject to\n",
"$$\n",
"F(x_j) = \\sum_k A_{jk} c_k, \\quad |F(x_j)| \\leq c, \\quad \\forall j = 0, \\ldots, M-1.\n",
"$$\n",
"Here, $ c $ is a relaxation parameter, chosen as\n",
"$$\n",
"c = \\max(0.9999, 1 - 0.1\\epsilon).\n",
"$$\n"
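The optimization above is a linear program. The sketch below solves it with SciPy's `linprog`; the degree, grid size, and the instantiation for the $\gamma = 2$ amplification target on $[0, \frac{1}{2}]$ are our illustrative choices, not the notebook's exact parameters:

```python
import numpy as np
from scipy.optimize import linprog

gamma, eps = 2.0, 1e-2       # amplify f(x) = gamma * x on [0, 1/gamma]
d, M = 41, 200               # odd polynomial degree, number of grid points
cap = max(0.9999, 1 - 0.1 * eps)

x = -np.cos(np.pi * np.arange(M) / (M - 1))        # Chebyshev grid on [-1, 1]
k = np.arange((d - 1) // 2 + 1)
T_odd = np.cos(np.outer(np.arccos(x), 2 * k + 1))  # A_jk = T_{2k+1}(x_j)

n_c = len(k)                 # variables: c_0..c_{n_c-1}, plus the max error t
A_ub, b_ub = [], []
for j in range(M):
    if 0.0 <= x[j] <= 1.0 / gamma:                 # target region
        tgt = (1 - eps) * gamma * x[j]
        A_ub.append(np.append(T_odd[j], -1.0));  b_ub.append(tgt)
        A_ub.append(np.append(-T_odd[j], -1.0)); b_ub.append(-tgt)
    A_ub.append(np.append(T_odd[j], 0.0));  b_ub.append(cap)   # |F(x_j)| <= cap
    A_ub.append(np.append(-T_odd[j], 0.0)); b_ub.append(cap)

obj = np.zeros(n_c + 1); obj[-1] = 1.0             # minimize the max error t
res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * (n_c + 1))
c_coeffs, t_err = res.x[:n_c], res.x[-1]
```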
"id": "dd0f514b-294e-448d-a258-cee175edfa84",
"metadata": {},
"source": [
"Then we apply the phases on the Magnus block-encoding:"
]
},
{
"id": "dfe76a47-58bb-4df1-a5e5-ba5567859afa",
"metadata": {},
"source": [
"### Long Time Evolution\n",
"\n",
"Lastly, we sequentially apply the block-encodings of each timeslice. To obtain a quantum variable that is $|0\rangle$ when all the block-encodings have been applied to the state, we use a counter. A further amplitude amplification step is possible using the counter; however, we do not do it here."
]
},
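For this diagonal example the long-timestep propagators can be multiplied out classically, which gives a useful correctness target for the circuit. A sketch (NumPy, mirroring the same four timesteps from $t=0$ to $t=2$):

```python
import numpy as np

N, T, L = 4, 2.0, 4
i = np.arange(N)
edges = np.linspace(0.0, T, L + 1)

def xi(a: float, b: float) -> np.ndarray:
    # Xi_l = exp(integral_a^b A(t) dt), exact here since A is diagonal
    return np.diag(np.exp(np.sin(i + b) - np.sin(i + a)))

marched = np.eye(N)
for a, b in zip(edges[:-1], edges[1:]):
    marched = xi(a, b) @ marched   # apply each long-timestep propagator in turn

# The product telescopes to the full-interval propagator
full = np.diag(np.exp(np.sin(i + T) - np.sin(i)))
```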
{
"id": "718d462f-d80a-49d7-8478-0847c733c1e9",
"metadata": {},
"source": [
"### Comparing to the Naive Case: Without Uniform Amplification\n",
"\n",
"Here we do not use the amplification step. We see that the measured amplitudes are much smaller than in the amplified case."
]
},
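The expected gap can be estimated on paper: each long timestep leaves the redundant prefactor of 2, so over four steps the naive amplitudes shrink by $2^4$ relative to the amplified run. A tiny sketch with these illustrative numbers:

```python
s = 2.0      # per-step prefactor of the exponential block-encoding
steps = 4    # number of long timesteps in this demonstration

naive_amplitude_factor = s ** (-steps)              # amplitude shrinks by s^-L
probability_ratio = (1.0 / naive_amplitude_factor) ** 2

# With s = 2 and 4 steps, the naive success probability is s^(2L) = 256x smaller.
```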
{
"id": "10776c12-6069-46b9-920d-34f0cb5f552a",
"metadata": {},
"source": [
"### Comparing Classical and Quantum Results"
]
},
{
"cell_type": "markdown",
"id": "2f788f9c-dd78-4c95-9265-4feb600c6e19",
"metadata": {},
"source": [
"In this final step, we verify that the classical and quantum solutions are equivalent:"
]
},
{
"<a id='TimeMarching'>[1]</a>: [Fang, Di, Lin, Lin, and Tong, Yu. Time-marching based quantum solvers for time-dependent linear differential equations. Quantum 7, 955 (2023).](https://doi.org/10.22331/q-2023-03-20-955)\n",
"\n",
"<a id='Dyson'>[2]</a>: [M. Kieferová, A. Scherer, and D. W. Berry. Simulating the dynamics of time-dependent\n",
"Hamiltonians with a truncated Dyson series. Phys. Rev. A, 99(4), Apr\n",
"2019](https://arxiv.org/abs/1805.00582).\n",
"\n",
"<a id='Magnus'>[3]</a>: [Magnus Expansion (Wikipedia)](https://en.wikipedia.org/wiki/Magnus_expansion).\n",
"\n",
"<a name='OAA'>[4]</a>: [Berry, Dominic W., et al. Exponential improvement in precision for simulating sparse Hamiltonians. Proceedings of the forty-sixth annual ACM symposium on Theory of Computing. 2014.](https://dl.acm.org/doi/abs/10.1145/2591796.2591854)\n",
"\n",
"\n",
"<a name='USVA'>[5]</a>: [A. Gilyén, Y. Su, G. H. Low, and N. Wiebe. Quantum singular value transformation and\n",
