
CIS522: Deep learning

Physics-informed deep learning

Paris Perdikaris
April 7, 2020
https://github.com/PredictiveIntelligenceLab/USNCCM15-Short-Course-Recent-Advances-in-Physics-Informed-Deep-Learning
Motivation and open challenges

Goal: Predictive modeling, analysis and optimization of complex systems

Hypothesis: Bridge data (the strength of ML) with prior knowledge (the strength of CSE), e.g. through Bayes' rule:

p(θ|D) ∝ p(D|θ) p(θ)

Target tasks:
• Robust design/control
• Uncertainty quantification

Challenges:
• Large parameter spaces
• High cost of data acquisition
• Limited and high-dimensional data
• Incomplete models, imperfect data (e.g., missing data, outliers, complex noise processes)
• Multiple tasks and data modalities (e.g., images, time-series, scattered measurements, etc.)

Can we bridge knowledge from scientific computing and machine learning to tackle these challenges?
Physics of AI: Two schools of thought

1. Physics is implicitly baked into specialized neural architectures with strong inductive biases (e.g. invariance to simple group symmetries).

[Figures from Kondor, R., Son, H. T., Pan, H., Anderson, B., & Trivedi, S. (2018). Covariant compositional networks for learning graphs. arXiv preprint arXiv:1801.02144: message aggregation across graph levels, and first/second order permutation-covariant compositional networks.]

2. Physics is explicitly imposed by constraining the output of conventional neural architectures with weak inductive biases.

Psichogios & Ungar, 1992; Lagaris et al., 1998; Raissi et al., 2019; Lu et al., 2019; Zhu et al., 2019

Excerpt from Lu et al. (2019): Commonly used activation functions include the logistic sigmoid 1/(1 + e^(−x)), the hyperbolic tangent (tanh), and the rectified linear unit (ReLU, max{x, 0}).

Physics-informed neural networks for solving PDEs. We consider the following PDE, parameterized by λ, for the solution u(x) with x = (x_1, ..., x_d) defined on a domain Ω ⊂ R^d:

f(x; ∂u/∂x_1, ..., ∂u/∂x_d; ∂²u/∂x_1∂x_1, ..., ∂²u/∂x_1∂x_d; ...; λ) = 0,  x ∈ Ω,   (2.1)

with suitable boundary conditions

B(u, x) = 0  on ∂Ω,

where B(u, x) could be Dirichlet, Neumann, Robin, or periodic boundary conditions. For time-dependent problems, we consider time t as a special component of x, and Ω then contains the temporal domain. The initial condition can be simply treated as a special type of Dirichlet boundary condition on the spatio-temporal domain.

[Fig. 1 from Lu et al. (2019): Schematic of a PINN for solving the diffusion equation ∂u/∂t = ∂²u/∂x² with mixed boundary conditions (BC) u(x, t) = g_D(x, t) on Γ_D ⊂ ∂Ω and ∂u/∂n(x, t) = g_R(u, x, t) on Γ_R ⊂ ∂Ω. The network NN(x, t; θ) outputs û; automatic differentiation supplies ∂û/∂t and ∂²û/∂x²; the PDE residual and the BC/IC misfits û(x, t) − g_D(x, t) and ∂û/∂n(x, t) − g_R(u, x, t) are combined into a loss that is minimized to obtain θ*. The initial condition (IC) is treated as a special type of boundary condition. T_f and T_b denote the two sets of residual points for the equation and the BC/IC.]
Physics-informed Neural Networks

[Schematic repeated from Lu et al. (2019), Fig. 1: the network NN(x, t; θ) produces û; automatic differentiation supplies ∂û/∂t and ∂²û/∂x²; the PDE residual ∂û/∂t − ∂²û/∂x², the boundary terms û(x, t) − g_D(x, t) and ∂û/∂n(x, t) − g_R(u, x, t), and the IC terms are assembled into a loss whose minimization yields θ*.]

References:
Psichogios, D. C., & Ungar, L. H. (1992). A hybrid neural network-first principles approach to process modeling. AIChE Journal, 38(10), 1499-1511.
Lagaris, I. E., Likas, A., & Fotiadis, D. I. (1998). Artificial neural networks for solving ordinary and partial differential equations. IEEE Transactions on Neural Networks, 9(5), 987-1000.
Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378, 686-707.
Lu, L., Meng, X., Mao, Z., & Karniadakis, G. E. (2019). DeepXDE: A deep learning library for solving differential equations. arXiv preprint arXiv:1907.04502.
General formulation of PINNs

Physics-informed neural networks (PINNs) aim at inferring a continuous latent function u(x, t) that arises as the solution to a system of nonlinear partial differential equations (PDEs) of the general form

u_t + N_x[u] = 0,  x ∈ Ω,  t ∈ [0, T],
u(x, 0) = h(x),  x ∈ Ω,
u(x, t) = g(x, t),  t ∈ [0, T],  x ∈ ∂Ω.

We proceed by approximating u(x, t) by a deep neural network f_θ(x, t), and define the residual of the PDE as

r_θ(x, t) := ∂f_θ(x, t)/∂t + N_x[f_θ(x, t)].

The corresponding loss function is given by

L(θ) := L_u(θ) + L_r(θ) + L_u0(θ) + L_ub(θ),

where L_u is the data fit, L_r the PDE residual, L_u0 the ICs fit, and L_ub the BCs fit. Training proceeds via stochastic gradient descent:

θ_{n+1} = θ_n − η ∇_θ L(θ_n).

*All gradients are computed via automatic differentiation.
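To make the formulation concrete, here is a minimal sketch (not from the slides) of the composite loss and the gradient-descent update in TensorFlow 2.x, using the one-dimensional heat equation u_t − u_xx = 0 as a stand-in for N_x[u]; the network u_net and all sample-point names are illustrative assumptions:

import tensorflow as tf

u_net = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation="tanh", input_shape=(2,)),
    tf.keras.layers.Dense(20, activation="tanh"),
    tf.keras.layers.Dense(1),
])

def pde_residual(t, x):
    # r_theta(x, t) = d/dt f_theta + N_x[f_theta]; t, x are float32 tensors
    with tf.GradientTape(persistent=True) as tape:
        tape.watch([t, x])
        u = u_net(tf.concat([t, x], axis=1))
        u_x = tape.gradient(u, x)          # first derivative, still recorded on the tape
    u_t = tape.gradient(u, t)
    u_xx = tape.gradient(u_x, x)           # second derivative via the persistent tape
    return u_t - u_xx                      # heat equation: N_x[u] = -u_xx

def loss(t_d, x_d, u_d, t_r, x_r):
    # L(theta) = L_u (data/IC/BC fit) + L_r (PDE residual at collocation points)
    u_pred = u_net(tf.concat([t_d, x_d], axis=1))
    return (tf.reduce_mean(tf.square(u_pred - u_d))
            + tf.reduce_mean(tf.square(pde_residual(t_r, x_r))))

opt = tf.keras.optimizers.SGD(learning_rate=1e-3)   # theta_{n+1} = theta_n - eta * grad

def train_step(t_d, x_d, u_d, t_r, x_r):
    with tf.GradientTape() as tape:
        l = loss(t_d, x_d, u_d, t_r, x_r)
    grads = tape.gradient(l, u_net.trainable_variables)
    opt.apply_gradients(zip(grads, u_net.trainable_variables))
    return l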
Example: Burgers' equation in 1D

Burgers' equation is notoriously hard to resolve by classical numerical methods. In one space dimension, Burgers' equation along with Dirichlet boundary conditions reads as

u_t + u u_x − (0.01/π) u_xx = 0,  x ∈ [−1, 1],  t ∈ [0, 1],   (3)
u(0, x) = −sin(πx),
u(t, −1) = u(t, 1) = 0.

Let us define f(t, x) to be given by

f := u_t + u u_x − (0.01/π) u_xx,

and proceed by approximating u(t, x) by a deep neural network. To highlight the simplicity of implementing this idea we have included a Python code snippet using TensorFlow [16], currently one of the most popular and well documented open source libraries for machine learning computations. To this end, u(t, x) can simply be defined as

def u(t, x):
    u = neural_net(tf.concat([t, x], 1), weights, biases)
    return u

Correspondingly, the physics-informed neural network f(t, x) takes the form

def f(t, x):
    u_val = u(t, x)  # renamed from `u` so the function above is not shadowed
    u_t = tf.gradients(u_val, t)[0]
    u_x = tf.gradients(u_val, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    f = u_t + u_val * u_x - (0.01 / np.pi) * u_xx  # np.pi: TensorFlow defines no tf.pi
    return f

The shared parameters between the neural networks u(t, x) and f(t, x) can be learned by minimizing the mean squared error loss

MSE = MSE_u + MSE_f,   (4)

where

MSE_u = (1/N_u) Σ_{i=1}^{N_u} |u(t_u^i, x_u^i) − u^i|²

and

MSE_f = (1/N_f) Σ_{i=1}^{N_f} |f(t_f^i, x_f^i)|².

Here, {t_u^i, x_u^i, u^i}_{i=1}^{N_u} denote the initial and boundary training data on u(t, x) and {t_f^i, x_f^i}_{i=1}^{N_f} specify the collocation points for f(t, x). The loss MSE_u corresponds to the initial and boundary data while MSE_f enforces the structure imposed by equation (3) at a finite set of collocation points. In all benchmarks considered in this work, the total number of training data N_u is relatively small (a few hundred up to a few thousand points).
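As a hedged illustration (not the paper's full script), the two terms of (4) can be assembled in the same TF1 style as the snippets above; the tensors t_u, x_u, u_data, t_f, x_f are assumed placeholders for the data and collocation points:

u_pred = u(t_u, x_u)             # network prediction at the N_u data points
f_pred = f(t_f, x_f)             # PDE residual at the N_f collocation points
mse_u = tf.reduce_mean(tf.square(u_pred - u_data))   # initial/boundary data fit
mse_f = tf.reduce_mean(tf.square(f_pred))            # equation (3) at collocation points
train_op = tf.train.AdamOptimizer(1e-3).minimize(mse_u + mse_f)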
Physics-informed Neural Networks

Figure 1: Burgers' equation: Top: Predicted solution u(t, x) along with the initial and boundary training data (100 points). In addition we are using 10,000 collocation points generated using a Latin Hypercube Sampling strategy. Bottom: Comparison of the predicted and exact solutions corresponding to the three temporal snapshots (t = 0.25, 0.50, 0.75) depicted by the white vertical lines in the top panel. The relative L² error for this case is 6.7 · 10⁻⁴. Model training took approximately 60 seconds on a single NVIDIA Titan X GPU card.
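For reference, a short sketch of drawing such collocation points with Latin Hypercube Sampling over (t, x) ∈ [0, 1] × [−1, 1]; it uses scipy.stats.qmc (available in SciPy ≥ 1.7), which is an assumption about tooling rather than the original implementation:

from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=0)
pts = sampler.random(n=10000)                                    # samples in the unit square
pts = qmc.scale(pts, l_bounds=[0.0, -1.0], u_bounds=[1.0, 1.0])  # map to [0,1] x [-1,1]
t_f, x_f = pts[:, :1], pts[:, 1:]                                # collocation points for f(t, x)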
Physics-informed neural networks

Raissi, M., Yazdani, A., & Karniadakis, G. E. (2020). Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science.
Extensions to CNNs and GCNs

Physics-constrained convolutional surrogates (Zhu et al., 2019). Consider PDE systems where u(s) are the field variables of interest, f(s) is the source field, and K(s) denotes an input property field characterizing the system's constitutive behavior; B is the operator for boundary conditions defined on the boundary of the domain S. In particular, we consider the following Darcy flow problem as a motivating example throughout:

−∇ · (K(s) ∇u(s)) = f(s),  s ∈ S,   (2)

with boundary conditions

u(s) = u_D(s),  s ∈ Γ_D,
∇u(s) · n = g(s),  s ∈ Γ_N,   (3)

where n is the unit normal vector to the Neumann boundary Γ_N. Of interest are PDEs for which the field variables can be computed via an appropriate minimization of a field energy functional (potential) V:

arg min_u V(u; K).   (4)

These are common in many linear and nonlinear problems in physics and serve as the basis of the finite element method. For cases where such potentials cannot be found [46], one can consider V as the residual norm of the PDE evaluated at different trial solutions:

V(u; K) = ||R(u; K)||².   (5)

A dense convolutional encoder-decoder network [9] is used as the deterministic surrogate model, with one input channel x (the realization of a random field) and three output channels [u, τ_1, τ_2] (the pressure and two flux fields); the upsampling in the decoding layers is nearest-neighbor upsampling followed by convolution, instead of the transposed convolution used in the data-driven case. The model is trained with a physics-constrained loss, without labeled target data.

Differentiable physics-informed graph networks (DPGN; Seo & Liu, 2019). Sequential observations governed by physics rules are modeled on a graph with a recurrent encoder/graph-network/decoder architecture: node and edge encoders map observations into a latent space H, a GN block updates the node/edge hidden representations, a decoder produces predictions, and the core block can be repeated for as many time steps T as required. Known physics is imposed on the latent representations through a physics loss L_phy = Σ_i f_phy(H_i, H_{i+1}, ..., H_{i+M−1}), added to the supervised objective, L = Σ_T Σ_M ||ŷ_i − y_i||² + λ L_phy. Examples of dynamic physics on a graph (Table 2) include the diffusion equation u̇ = α ∇²u, updated as v_i' = v_i + α Σ_{j:(i,j)∈E} (v_j − v_i), and the wave equation ü = c² ∇²u, updated as v_i'' = 2v_i' − v_i + c² Σ_{j:(i,j)∈E} (v_j' − v_i'). Heat and wave dynamics on a graph can be extracted from the physics knowledge alone, without optimizing a supervised loss, showing that physics knowledge is beneficial. For climate data, simulated observations over 16 days around the Southern California region (WRF model, Skamarock et al., 2008; 18,189 grid patches, hourly records) are modeled by treating each patch as a vertex (similar to Santoro et al., 2017) and connecting adjacent patches with edges, with two subsets of patches sampled around the Los Angeles and San Diego areas.

Zhu, Y., Zabaras, N., Koutsourelakis, P. S., & Perdikaris, P. (2019). Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data. Journal of Computational Physics, 394, 56-81.

Seo, S., & Liu, Y. (2019). Differentiable physics-informed graph networks. arXiv preprint arXiv:1902.02950.
Physics-informed deep learning in cardiac electrophysiology

Physics-Informed Neural Networks for Cardiac Activation Mapping (Sahli Costabal et al., 2020). Two neural networks approximate the activation time T and the conduction velocity V, trained with a loss function that accounts for the similarity between the output of the network and the data, the physics of the problem (the Eikonal equation), and regularization terms:

arg min_{θ_T, θ_V} L(θ_T, θ_V).   (7)

A synthetic benchmark problem based on the Eikonal equation is used to characterize the method before applying it to the left atrium.

Application to surfaces from electro-anatomic mapping. During electro-anatomic mapping, data can only be acquired on the cardiac surface, either of the ventricles or the atria. We thus represent the resulting map as a surface in three dimensions and neglect the thickness of the atrial wall; this is a reasonable assumption since the thickness-to-diameter ratio of the atria is in the order of 0.05. Our assumption implies that the electrical wave can only travel along the surface and not perpendicular to it. To account for this constraint, we include an additional loss term:

L_N = α_N (1/(2 N_R)) Σ_{i=1}^{N_R} (∇T(x_i) · N_i)²   (10)

This form favors solutions where the activation time gradients are orthogonal to the surface normals N_i. To implement this constraint, we assume a triangular discretization of either the left or right atrium, which we obtain from magnetic resonance imaging or computed-tomography imaging; we compute N_i for each triangle and define the N_R collocation points as the centroids of each triangle in the mesh. The constraint is enforced weakly through the factor α_N.

Uncertainty quantification. An ensemble of neural networks is trained with different prior parameter initializations (randomly sampled with Glorot initialization [20]), and the data are perturbed with Gaussian noise with variance σ_N so that each network of the ensemble is trained on a slightly different dataset. The final prediction is obtained as the mean output of the ensemble. The resulting uncertainty estimates can inform physicians about the quality of the cardiac activation mapping and can drive active learning: the predictions and uncertainty estimates are refined iteratively as more data become available and the model is trained, and data can be acquired in parallel since the prediction step and the entropy computation are of negligible computational cost.

Algorithm 1: Active learning algorithm to iteratively identify the most efficient sampling points.
Given: number of initial samples N_init, number of active learning samples N_AL, set of candidate locations X_cand, number of initial training iterations M_init, number of active learning training iterations M_AL, and empty sets X and T that contain locations and activation times:
  Randomly select N_init samples from X_cand
  Remove the N_init samples from X_cand and add them to X
  Acquire the values of the activation times at the N_init locations and add them to T
  Initialize the model and train it using the ADAM optimizer [25] for M_init iterations
  for i = {1, N_AL} do
    compute entropy H(X_cand)
    find the new location of maximum entropy: arg max_{x ∈ X_cand} H(x)
    remove x from X_cand and add it to X
    acquire the activation time at x and add it to T
    train the model using ADAM [25] for M_AL iterations
  end for

[Figure 5: Correlation of uncertainty and error. For the benchmark problem, trained with 30 samples, the computed entropy tends to be higher at regions where the error is higher, and the points of maximum entropy and maximum error are co-located. The black circles indicate the sampling locations.]

[Figure 6: Benchmark problem and active learning. 30 simulations of active learning with different initial samples are compared against a Latin hypercube design; the box plots show a significant reduction in activation time normalized root mean squared error (p < 10⁻⁷) and in conduction velocity normalized mean absolute error (p < 0.015) when using the active learning strategy.]

Sahli Costabal, F., Yang, Y., Perdikaris, P., Hurtado, D. E., & Kuhl, E. (2020). Physics-informed neural networks for cardiac activation mapping. Frontiers in Physics, 8, 42.
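A compact sketch of the loop in Algorithm 1 (an illustration, not the authors' code); the fit and entropy callables are hypothetical stand-ins for training the network ensemble and evaluating its predictive entropy:

import numpy as np

def active_learning(X_cand, acquire, fit, entropy, n_init=10, n_al=20, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_cand), size=n_init, replace=False)  # N_init random picks
    X = list(X_cand[idx])
    T = [acquire(x) for x in X]                                # measure activation times
    X_cand = np.delete(X_cand, idx, axis=0)
    model = fit(np.array(X), np.array(T))                      # initial training
    for _ in range(n_al):
        H = entropy(model, X_cand)                             # compute H(X_cand)
        j = int(np.argmax(H))                                  # location of maximum entropy
        x_new = X_cand[j]
        X_cand = np.delete(X_cand, j, axis=0)
        X.append(x_new)
        T.append(acquire(x_new))                               # acquire activation time at x
        model = fit(np.array(X), np.array(T))                  # retrain
    return model, np.array(X), np.array(T)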
Physics-informed filtering of 4D-flow MRI

[Figure: filtered velocity magnitude, V-velocity, and W-velocity fields.]
Recent advances

Discovery of ODEs / Discovery of PDEs

[Figure 2 (multistep neural networks): Lorenz system: the exact phase portrait of the Lorenz system (left panel) is compared to the corresponding phase portrait of the learned dynamics (right panel). The learned system correctly captures the form of the dynamics and accurately reproduces the attractor.] The Lorenz system has a positive Lyapunov exponent, and small differences between the exact and learned models grow exponentially, even though the attractor remains intact. This behavior is evident as we compare the exact versus the predicted trajectories: small discrepancies due to finite accuracy in the predicted dynamics lead to large errors in the forecasted time-series after t > 4, despite the fact that the bi-stable structure of the attractor is well captured (see figure 2).

Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2018). Multistep neural networks for data-driven discovery of nonlinear dynamical systems. arXiv preprint.
Raissi, M. (2018). Deep hidden physics models: Deep learning of nonlinear partial differential equations. arXiv preprint arXiv:1801.06637.

Fluid flow behind a cylinder. In this example we collect data for the fluid flow past a cylinder (see figure 4) at Reynolds number 100 using direct numerical simulations of the two-dimensional Navier-Stokes equations. In particular, following the problem setup presented in [23] and [24], we simulate the Navier-Stokes equations describing the two-dimensional fluid flow past a circular cylinder at Reynolds number 100 using the Immersed Boundary Projection Method [25, 26]. This approach utilizes a multi-domain scheme with four nested domains, each successive grid being twice as large as the previous one. Length and time are non-dimensionalized so that the cylinder has unit diameter and the flow has unit velocity. Data is collected on the finest domain with dimensions 9 × 4 at a grid resolution of 449 × 199. The flow solver uses a 3rd-order Runge-Kutta scheme.

The KdV equation. As a mathematical model of waves on shallow water surfaces one could consider the Korteweg-de Vries (KdV) equation, which reads as

u_t = −u u_x − u_xxx.   (6)

To obtain a set of training data we simulate the KdV equation (6) using conventional spectral methods. In particular, we start from an initial condition u(0, x) = sin(πx/20), x ∈ [−20, 20], and assume periodic boundary conditions. We integrate equation (6) up to the final time t = 40. We use the Chebfun package [43] with a spectral Fourier discretization with 512 modes and a fourth-order explicit Runge-Kutta temporal integrator with time-step size 10⁻⁴. The solution is saved every Δt = 0.2 to give us a total of 201 snapshots. Out of this data-set, we generate a smaller training subset, scattered in space and time, by randomly sub-sampling 10,000 data points from time t = 0 to t = 26.8; in other words, we are sub-sampling from the original dataset only in the training portion of the domain between times t = 0 and t = 26.8. Given the training data, we are interested in learning N as a function of the solution u and its derivatives up to the 3rd order; i.e.,

u_t = N(u, u_x, u_xx, u_xxx).   (7)

We represent the solution u by a 5-layer deep neural network with 50 neurons per hidden layer. Furthermore, we let N be a neural network with 2 hidden layers and 100 neurons per hidden layer. These two networks are trained by minimizing the sum of squared errors loss of equation (3). To illustrate the effectiveness of our approach, we solve the learned partial differential equation (7) using the PINNs algorithm [34], assuming periodic boundary conditions and the same initial condition as the one used to generate the original dataset.
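As a side illustration of the spectral data-generation step described for the KdV example (a sketch under the stated grid assumptions, not the Chebfun setup itself), the right-hand side of (6) can be evaluated with FFT-based derivatives; the time integration would still require a suitably small step or a stiff/exponential integrator:

import numpy as np

N, L = 512, 40.0
x = -20.0 + L * np.arange(N) / N                 # periodic grid on [-20, 20)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # Fourier wavenumbers

def kdv_rhs(t, u):
    # u_t = -u u_x - u_xxx, with spatial derivatives computed spectrally
    u_hat = np.fft.fft(u)
    u_x = np.real(np.fft.ifft(1j * k * u_hat))
    u_xxx = np.real(np.fft.ifft((1j * k) ** 3 * u_hat))
    return -u * u_x - u_xxx

u0 = np.sin(np.pi * x / 20)                      # initial condition from the text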
[Figure 3: The KdV equation: a solution to the KdV equation (left panel) is compared to the corresponding solution of the learned partial differential equation (right panel). The identified system correctly captures the form of the dynamics and accurately reproduces the solution with a relative L²-error of 6.28e-02. It should be emphasized that the training data are collected only in roughly two-thirds of the domain, between times t = 0 and t = 26.8, represented by the white vertical lines; the algorithm is thus extrapolating from time t = 26.8 onwards. The relative L²-error on the training portion of the domain is 3.78e-02.]

A detailed study of the choice of the order is provided in section 3.1 for the Burgers' equation. To test the algorithm even further, let us change the initial condition to cos(−πx/20) and solve the KdV equation (6) using the conventional spectral method outlined above. We then compare the resulting solution to the one obtained by solving the learned partial differential equation (7) using the PINNs algorithm [34]. It is worth emphasizing that the algorithm is trained on the dataset depicted in figure 3 and is being tested on a different dataset; the surprising result reported in figure 4 strongly indicates that the algorithm is accurately learning the underlying partial differential equation.

Table 2: Burgers' equation: relative L²-error between solutions of the Burgers' equation and the learned partial differential equation, as a function of the highest order of derivatives included in N: 1st order: 1.14e+00; 2nd order: 1.29e-02; 3rd order: 3.42e-02; 4th order: 5.54e-02.

High-dimensional PDEs / Stochastic PDEs

[Figure 5 (Yang & Perdikaris, 2019): Burgers equation with noisy data. Top: mean of p_θ(u|x, t, z), along with the location of the training data {(x_i, t_i), u_i}, i = 1, ..., N_u (200 points). Middle: prediction and predictive uncertainty (two-standard-deviation band) at t = 0.25, t = 0.5 and t = 0.75. Bottom: variance of p_θ(u|x, t, z).]

Raissi, M. (2018). Forward-backward stochastic neural networks: Deep learning of high-dimensional partial differential equations. arXiv preprint arXiv:1804.07010.
Yang, Y., & Perdikaris, P. (2019). Adversarial uncertainty quantification in physics-informed neural networks. Journal of Computational Physics.
Recent advances

Fractional PDEs. [Figure 1 (Pang et al., 2018): fPINNs for solving integral, differential, and integro-differential equations. Here we choose specific integro-differential operators in the form of time- and/or space-fractional derivatives. fPINNs can incorporate both fractional-order and integer-order operators. In the PDE shown in the figure, f(·) is a function of operators. The abbreviations "SM" and "AD" represent spectral methods and automatic differentiation, respectively.]

In this paper, we focus on the NN approaches due to the high expressive power of NNs in function approximation [24, 25, 26, 27]. In particular, we concentrate on physics-informed neural networks (PINNs) [28, 29, 30, 1], which belong to the second aforementioned category. The recent applications of PINNs include (1) inferring the velocity and pressure fields from the concentration field of a passive scalar in solving the Navier-Stokes equations [31], and (2) identifying the distributed parameters of stochastic PDEs [21]. However, PINNs, despite their high flexibility, cannot be directly applied to the solution of fractional PDEs, because the classical chain rule, which works rather efficiently in forward and backward propagation for NNs, is not even valid in fractional calculus. We could consider a fractional version of the chain rule, but it is in the form of an infinite series, and hence it is computationally prohibitive. To overcome this difficulty, here we propose an alternative method in the form of fractional PINNs (fPINNs). Specifically, we propose fPINNs for solving integral, differential, and integro-differential equations, and more generally fPINNs can handle both fractional-order and integer-order operators. We employ the automatic differentiation technique to analytically derive the integer-order derivatives of the NN output, while we approximate the fractional derivatives numerically using standard methods for the numerical discretization of fractional operators; an illustrative schematic is shown in Fig. 1. There are three attractive features of fPINNs. (1) They have superior accuracy for black-box and noisy forcing terms: when the forcing term is simply measured at scattered spatio-temporal points, interpolation has to be performed using standard numerical methods, but this may introduce large interpolation errors for sparse measurements. In contrast, fPINNs can bypass the forcing term interpolation and instead construct the equation residual at these measurement points. Numerical results show that fPINNs can achieve higher solution accuracy for sparse measurements for both forward and inverse problems. Additionally, the noise in the data can be naturally taken into account by employing regularization techniques, such as L1 and L2 regularization [32].

Pang, G., Lu, L., & Karniadakis, G. E. (2018). fPINNs: Fractional physics-informed neural networks. arXiv preprint arXiv:1811.08967.

Surrogate modeling & high-dimensional UQ. [Figure 8 (Zhu et al., 2019): predictions of the physics-constrained surrogate (PCS) under the mixed residual loss: (a), (b) test results for the PCS trained with 8192 samples of GRF KLE512; (c), (d) test results for the PCS trained with 4096 samples of channelized fields. Comparing its predictive performance against the data-driven surrogate (DDS) in Fig. 9, the relative L2 error decreases with the number of training inputs.]

Zhu, Y., Zabaras, N., Koutsourelakis, P. S., & Perdikaris, P. (2019). Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data. Journal of Computational Physics, 394, 56-81.

Multi-fidelity modeling for stochastic systems. [Figure 1 (Yang & Perdikaris, 2019): building probabilistic surrogates using conditional generative models. We assume that each observed data pair in the physical space (x, y), with x, y ~ q(x, y) = q(y|x) q(x), is generated by a deterministic nonlinear transformation of the inputs x and a set of latent variables z ~ p(z), i.e. y = f_θ(x, z). This construction generalizes the classical observation model used in regression, namely y = f_θ(x) + ε, which can be viewed as a simplified case corresponding to an additive noise model.]

Yang, Y., & Perdikaris, P. (2019). Conditional deep surrogate models for stochastic, high-dimensional, and multi-fidelity systems. Computational Mechanics, 1-18.

Integrated software. [Fig. 5 (Lu et al., 2019): flowchart of DeepXDE corresponding to Procedure 3.1. The white boxes define the PDE problem (geometry, differential equations, boundary/initial conditions) and the training hyperparameters (neural net, training data); the blue boxes combine the PDE problem and training hyperparameters via Model.compile(...), Model.train(..., callbacks=...), and Model.predict(...); the orange boxes are the three steps (from right to left) to solve the PDE.]

Lu, L., Meng, X., Mao, Z., & Karniadakis, G. E. (2019). DeepXDE: A deep learning library for solving differential equations. arXiv preprint arXiv:1907.04502.
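To give a flavor of that workflow, here is a hedged sketch of solving the diffusion-type problem of Fig. 1 with DeepXDE; the API names follow recent DeepXDE releases and may differ from the 2019 version cited above:

import numpy as np
import deepxde as dde

def pde(x, u):
    # x packs (x, t); residual of u_t - u_xx = 0
    u_t = dde.grad.jacobian(u, x, i=0, j=1)
    u_xx = dde.grad.hessian(u, x, i=0, j=0)
    return u_t - u_xx

geom = dde.geometry.Interval(-1, 1)                    # geometry
timedomain = dde.geometry.TimeDomain(0, 1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

bc = dde.icbc.DirichletBC(geomtime, lambda x: 0,       # boundary condition
                          lambda _, on_boundary: on_boundary)
ic = dde.icbc.IC(geomtime, lambda x: np.sin(np.pi * x[:, 0:1]),
                 lambda _, on_initial: on_initial)     # initial condition

data = dde.data.TimePDE(geomtime, pde, [bc, ic],       # training data
                        num_domain=2000, num_boundary=100, num_initial=100)
net = dde.nn.FNN([2] + [20] * 3 + [1], "tanh", "Glorot uniform")  # neural net

model = dde.Model(data, net)
model.compile("adam", lr=1e-3)                         # Model.compile(...)
model.train(iterations=10000)                          # Model.train(...)
y_pred = model.predict(geomtime.random_points(100))    # Model.predict(...)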
Universal differential equations

Universal ODE -> SInDy: after training a universal ODE, sparse identification is applied on only the β(t) term, replacing the unknown portion of the model with an interpretable expression.

Rackauckas, C., Ma, Y., Martensen, J., Warner, C., Zubov, K., Supekar, R., ... & Ramadhan, A. (2020). Universal differential equations for scientific machine learning. arXiv preprint arXiv:2001.04385.
Lin et al. (2020). A conceptual model for the coronavirus disease 2019 (COVID-19) outbreak in Wuhan, China with individual reaction and governmental action. International Journal of Infectious Diseases.

Differentiable programming: reverse-mode automatic differentiation of ODE solutions

The main technical difficulty in training continuous-depth networks is performing reverse-mode differentiation (also known as backpropagation) through the ODE solver. Differentiating through the operations of the forward pass is straightforward, but incurs a high memory cost and introduces additional numerical error.

We treat the ODE solver as a black box, and compute gradients using the adjoint sensitivity method (Pontryagin et al., 1962). This approach computes gradients by solving a second, augmented ODE backwards in time, and is applicable to all ODE solvers. It scales linearly with problem size, has low memory cost, and explicitly controls numerical error.

Consider optimizing a scalar-valued loss function L(·), whose input is the result of an ODE solver:

L(z(t_1)) = L( z(t_0) + ∫_{t_0}^{t_1} f(z(t), t, θ) dt ) = L(ODESolve(z(t_0), f, t_0, t_1, θ))   (3)

To optimize L, we require gradients with respect to θ. The first step is to determine how the gradient of the loss depends on the hidden state z(t) at each instant. This quantity is called the adjoint, a(t) = ∂L/∂z(t). Its dynamics are given by another ODE, which can be thought of as the instantaneous analog of the chain rule:

da(t)/dt = −a(t)ᵀ ∂f(z(t), t, θ)/∂z   (4)

We can compute ∂L/∂z(t_0) by another call to an ODE solver. This solver must run backwards, starting from the initial value of ∂L/∂z(t_1). One complication is that solving this ODE requires knowing the value of z(t) along its entire trajectory. However, we can simply recompute z(t) backwards in time together with the adjoint, starting from its final value z(t_1). The remaining gradient ∂L/∂θ can then be computed in the same single call to an ODE solver, which concatenates the original state, the adjoint, and the other partial derivatives into a single vector. Algorithm 1 shows how to construct the necessary dynamics and call an ODE solver to compute all gradients at once.

Algorithm 1: Reverse-mode derivative of an ODE initial value problem
  Input: dynamics parameters θ, start time t_0, stop time t_1, final state z(t_1), loss gradient ∂L/∂z(t_1)
  s_0 = [z(t_1), ∂L/∂z(t_1), 0_{|θ|}]                                   ▷ define initial augmented state
  def aug_dynamics([z(t), a(t), ·], t, θ):                              ▷ define dynamics on augmented state
      return [f(z(t), t, θ), −a(t)ᵀ ∂f/∂z, −a(t)ᵀ ∂f/∂θ]                ▷ compute vector-Jacobian products
  [z(t_0), ∂L/∂z(t_0), ∂L/∂θ] = ODESolve(s_0, aug_dynamics, t_1, t_0, θ) ▷ solve reverse-time ODE
  return ∂L/∂z(t_0), ∂L/∂θ                                              ▷ return gradients

Most ODE solvers have the option to output the state z(t) at multiple times. When the loss depends on these intermediate states, the reverse-mode derivative must be broken into a sequence of separate solves, one between each consecutive pair of output times (Figure 1). At each observation, the adjoint must be adjusted in the direction of the corresponding partial derivative ∂L/∂z(t_i).

[Figure 2: Reverse-mode differentiation of an ODE solution: the adjoint sensitivity method solves an augmented ODE backwards in time, recomputing z(t) backwards together with the adjoint.]

The results above extend those of Stapor et al. (2018, section 2.4.2). An extended version of Algorithm 1 including derivatives w.r.t. t_0 and t_1 can be found in Appendix C. Detailed derivations are provided in Appendix B. Appendix D provides Python code which computes all derivatives for scipy.integrate.odeint by extending the autograd automatic differentiation package.

Chen, T. Q., Rubanova, Y., Bettencourt, J., & Duvenaud, D. K. (2018). Neural ordinary differential equations. In Advances in Neural Information Processing Systems (pp. 6571-6583).
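As a worked miniature of Algorithm 1 (an illustrative sketch using scipy, not the authors' implementation), consider dz/dt = θz with loss L = z(t_1)²; the augmented backward solve recovers dL/dθ, which can be checked analytically:

import numpy as np
from scipy.integrate import solve_ivp

theta, t0, t1, z0 = 0.5, 0.0, 1.0, 1.0

def f(t, z):
    # original dynamics dz/dt = theta * z
    return theta * z

# Forward pass: solve the ODE up to t1.
fwd = solve_ivp(f, (t0, t1), [z0], rtol=1e-10, atol=1e-12)
z1 = fwd.y[0, -1]

def aug_dynamics(t, s):
    # Augmented state s = [z, a, g]: recompute z backwards in time, evolve the
    # adjoint da/dt = -a * df/dz, and accumulate dg/dt = -a * df/dtheta.
    z, a, _ = s
    return [theta * z, -a * theta, -a * z]

# Backward pass from t1 to t0, starting from a(t1) = dL/dz(t1) = 2 z(t1).
s1 = [z1, 2.0 * z1, 0.0]
bwd = solve_ivp(aug_dynamics, (t1, t0), s1, rtol=1e-10, atol=1e-12)
dL_dtheta = bwd.y[2, -1]

# Analytic check: z(t1) = z0 * exp(theta * (t1 - t0)), so dL/dtheta = 2 * z1**2 * (t1 - t0).
print(dL_dtheta, 2 * z1**2 * (t1 - t0))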