Copie de Beltrami

The document provides instructions for installing and using the DeepXDE library to train a Physics-Informed Neural Network (PINN) for the Beltrami flow problem. It covers setting up the environment, defining the hyperparameters, creating directories for model storage, and formulating the governing partial differential equations (PDEs). It also describes how the model is trained with specific optimizers and loss weights, and how the spatial and temporal domains of the problem are defined.

In [ ]:

!pip install deepxde

Collecting deepxde
Downloading DeepXDE-1.13.1-py3-none-any.whl.metadata (12 kB)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.10/dist-packages (from deepxde) (3.7.5)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from deepxde) (1.26.4)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.10/dist-packages (from deepxde) (1.2.2)
Requirement already satisfied: scikit-optimize>=0.9.0 in /usr/local/lib/python3.10/dist-packages (from deepxde) (0.10.2)
Requirement already satisfied: scipy in /usr/local/lib/python3.10/dist-packages (from deepxde) (1.13.1)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.10/dist-packages (from scikit-optimize>=0.9.0->deepxde) (1.4.2)
Requirement already satisfied: pyaml>=16.9 in /usr/local/lib/python3.10/dist-packages (from scikit-optimize>=0.9.0->deepxde) (25.1.0)
Requirement already satisfied: packaging>=21.3 in /usr/local/lib/python3.10/dist-packages (from scikit-optimize>=0.9.0->deepxde) (24.2
Requirement already satisfied: mkl_fft in /usr/local/lib/python3.10/dist-packages (from numpy->deepxde) (1.3.8)
Requirement already satisfied: mkl_random in /usr/local/lib/python3.10/dist-packages (from numpy->deepxde) (1.2.4)
Requirement already satisfied: mkl_umath in /usr/local/lib/python3.10/dist-packages (from numpy->deepxde) (0.1.1)
Requirement already satisfied: mkl in /usr/local/lib/python3.10/dist-packages (from numpy->deepxde) (2025.0.1)
Requirement already satisfied: tbb4py in /usr/local/lib/python3.10/dist-packages (from numpy->deepxde) (2022.0.0)
Requirement already satisfied: mkl-service in /usr/local/lib/python3.10/dist-packages (from numpy->deepxde) (2.4.1)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->deepxde) (3.5.0)
Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->deepxde) (1.3.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib->deepxde) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->deepxde) (4.55.3)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->deepxde) (1.4.7)
Requirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->deepxde) (11.0.0)
Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->deepxde) (3.2.0)
Requirement already satisfied: python-dateutil>=2.7 in /usr/local/lib/python3.10/dist-packages (from matplotlib->deepxde) (2.8.2)
Requirement already satisfied: PyYAML in /usr/local/lib/python3.10/dist-packages (from pyaml>=16.9->scikit-optimize>=0.9.0->deepxde) (
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.7->matplotlib->deepxde) (1
Requirement already satisfied: intel-openmp>=2024 in /usr/local/lib/python3.10/dist-packages (from mkl->numpy->deepxde) (2024.2.0)
Requirement already satisfied: tbb==2022.* in /usr/local/lib/python3.10/dist-packages (from mkl->numpy->deepxde) (2022.0.0)
Requirement already satisfied: tcmlib==1.* in /usr/local/lib/python3.10/dist-packages (from tbb==2022.*->mkl->numpy->deepxde) (1.2.0)
Requirement already satisfied: intel-cmplr-lib-rt in /usr/local/lib/python3.10/dist-packages (from mkl_umath->numpy->deepxde) (2024.2.
Requirement already satisfied: intel-cmplr-lib-ur==2024.2.0 in /usr/local/lib/python3.10/dist-packages (from intel-openmp>=2024->mkl->
Downloading DeepXDE-1.13.1-py3-none-any.whl (190 kB)

   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 190.7/190.7 kB 4.9 MB/s eta 0:00:00
Installing collected packages: deepxde
Successfully installed deepxde-1.13.1

First, we import the deepxde, numpy, matplotlib, scipy, os, shutil, pathlib and time libraries, and we select TensorFlow as the main backend for the computations in DeepXDE. We then define the hyperparameters of our code.
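The code below actually relies on DeepXDE's automatic backend detection (the log further down shows "Using backend: tensorflow.compat.v1"). If one wants to pin the backend explicitly, a minimal sketch is to set the DDE_BACKEND environment variable before importing deepxde (the log at the end of the notebook also mentions this mechanism):

import os
os.environ["DDE_BACKEND"] = "tensorflow"  # must be set before "import deepxde"
import deepxde as dde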

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import deepxde as dde
import os
import shutil
from pathlib import Path
import time
# Hyperparameters and domain settings
hidden_layers = 4
hidden_units = 50
number_of_epochs = 30000
learning_rate = 1e-3

domain_points = 50000
boundary_points = 5000
initial_points = 5000
test_points = 1000
test_points_per_dimension = 10

# Domain boundaries
x_min, x_max = -1, 1
y_min, y_max = -1, 1
z_min, z_max = -1, 1
t_min, t_max = 0, 1

# Physical parameters
a = 1
ds = [0.75, 1] # Values of parameter d
Re = 1 # Reynolds number

Creating the directories: this part deletes the old directory for the current value of d and creates a new one to store the results.

In [ ]:
def train_source_model(d):
    """
    Train a PINN model for the Beltrami flow with a given parameter d.
    """
    # Create the directory for the model
    path = Path('Neural_Networks', 'd_{}'.format(d))
    if path.exists() and path.is_dir():
        shutil.rmtree(path)
    os.makedirs(path)

Defining the partial differential equation (PDE): we now define our system of partial differential equations. The first argument x of the pde function is a four-dimensional vector: the first component (x[:, 0]) is the x coordinate, the second component (x[:, 1]) is the y coordinate, the third component (x[:, 2]) is the z coordinate, and the fourth component (x[:, 3]) is the time t. The second argument u is the output of the neural network, i.e. its prediction of (u, v, w, p) at the point (x, y, z, t).
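For reference, the residuals returned by pde correspond to the incompressible Navier-Stokes equations (momentum and continuity) that govern the Beltrami flow:

\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u} + \nabla p - \frac{1}{Re}\,\Delta \mathbf{u} = 0,
\qquad
\nabla \cdot \mathbf{u} = 0,
\]

with \(\mathbf{u} = (u, v, w)\) the velocity field and \(p\) the pressure.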

In [ ]:
# Define the PDE system for the Beltrami flow
def pde(x, u):
    u_vel, v_vel, w_vel, p = u[:, 0:1], u[:, 1:2], u[:, 2:3], u[:, 3:4]

    # First and second derivatives of u
    u_vel_x = dde.grad.jacobian(u, x, i=0, j=0)
    u_vel_y = dde.grad.jacobian(u, x, i=0, j=1)
    u_vel_z = dde.grad.jacobian(u, x, i=0, j=2)
    u_vel_t = dde.grad.jacobian(u, x, i=0, j=3)
    u_vel_xx = dde.grad.hessian(u, x, component=0, i=0, j=0)
    u_vel_yy = dde.grad.hessian(u, x, component=0, i=1, j=1)
    u_vel_zz = dde.grad.hessian(u, x, component=0, i=2, j=2)

    # First and second derivatives of v
    v_vel_x = dde.grad.jacobian(u, x, i=1, j=0)
    v_vel_y = dde.grad.jacobian(u, x, i=1, j=1)
    v_vel_z = dde.grad.jacobian(u, x, i=1, j=2)
    v_vel_t = dde.grad.jacobian(u, x, i=1, j=3)
    v_vel_xx = dde.grad.hessian(u, x, component=1, i=0, j=0)
    v_vel_yy = dde.grad.hessian(u, x, component=1, i=1, j=1)
    v_vel_zz = dde.grad.hessian(u, x, component=1, i=2, j=2)

    # First and second derivatives of w
    w_vel_x = dde.grad.jacobian(u, x, i=2, j=0)
    w_vel_y = dde.grad.jacobian(u, x, i=2, j=1)
    w_vel_z = dde.grad.jacobian(u, x, i=2, j=2)
    w_vel_t = dde.grad.jacobian(u, x, i=2, j=3)
    w_vel_xx = dde.grad.hessian(u, x, component=2, i=0, j=0)
    w_vel_yy = dde.grad.hessian(u, x, component=2, i=1, j=1)
    w_vel_zz = dde.grad.hessian(u, x, component=2, i=2, j=2)

    # Pressure gradient
    p_x = dde.grad.jacobian(u, x, i=3, j=0)
    p_y = dde.grad.jacobian(u, x, i=3, j=1)
    p_z = dde.grad.jacobian(u, x, i=3, j=2)

    # Momentum equations and continuity equation
    momentum_x = u_vel_t + (u_vel * u_vel_x + v_vel * u_vel_y + w_vel * u_vel_z) + p_x - 1 / Re * (u_vel_xx + u_vel_yy + u_vel_zz)
    momentum_y = v_vel_t + (u_vel * v_vel_x + v_vel * v_vel_y + w_vel * v_vel_z) + p_y - 1 / Re * (v_vel_xx + v_vel_yy + v_vel_zz)
    momentum_z = w_vel_t + (u_vel * w_vel_x + v_vel * w_vel_y + w_vel * w_vel_z) + p_z - 1 / Re * (w_vel_xx + w_vel_yy + w_vel_zz)
    continuity = u_vel_x + v_vel_y + w_vel_z

    return [momentum_x, momentum_y, momentum_z, continuity]

Exact analytical solutions: we then define the exact solutions for the velocity components u, v, w and the pressure p, based on the analytical solution of the Beltrami flow. The argument x (a four-dimensional vector) of u_func, v_func, w_func and p_func is the network input; each function simply returns the corresponding exact values evaluated at the given points x.
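In mathematical form, the analytical Beltrami-flow solution implemented by these functions (see also the complete script further down) is

\[
\begin{aligned}
u(x, y, z, t) &= -a \left[ e^{a x} \sin(a y + d z) + e^{a z} \cos(a x + d y) \right] e^{-d^2 t},\\
v(x, y, z, t) &= -a \left[ e^{a y} \sin(a z + d x) + e^{a x} \cos(a y + d z) \right] e^{-d^2 t},\\
w(x, y, z, t) &= -a \left[ e^{a z} \sin(a x + d y) + e^{a y} \cos(a z + d x) \right] e^{-d^2 t},\\
p(x, y, z, t) &= -\tfrac{1}{2} a^2 \big[ e^{2 a x} + e^{2 a y} + e^{2 a z}
  + 2 \sin(a x + d y) \cos(a z + d x)\, e^{a (y + z)}
  + 2 \sin(a y + d z) \cos(a x + d y)\, e^{a (z + x)}
  + 2 \sin(a z + d x) \cos(a y + d z)\, e^{a (x + y)} \big] e^{-2 d^2 t}.
\end{aligned}
\]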

In [ ]:
# Exact solutions for u, v, w, and p
def u_func(x):
    return -a * (np.exp(a * x[:, 0:1]) * np.sin(a * x[:, 1:2] + d * x[:, 2:3]) + np.exp(a * x[:, 2:3]) * np.cos(a * x[:, 0:1] + d * x[:, 1:2])) * np.exp(-d**2 * x[:, 3:4])

def v_func(x):
    return -a * (np.exp(a * x[:, 1:2]) * np.sin(a * x[:, 2:3] + d * x[:, 0:1]) + np.exp(a * x[:, 0:1]) * np.cos(a * x[:, 1:2] + d * x[:, 2:3])) * np.exp(-d**2 * x[:, 3:4])

def w_func(x):
    return -a * (np.exp(a * x[:, 2:3]) * np.sin(a * x[:, 0:1] + d * x[:, 1:2]) + np.exp(a * x[:, 1:2]) * np.cos(a * x[:, 2:3] + d * x[:, 0:1])) * np.exp(-d**2 * x[:, 3:4])

def p_func(x):
    return -0.5 * a**2 * (
        np.exp(2 * a * x[:, 0:1]) + np.exp(2 * a * x[:, 1:2]) + np.exp(2 * a * x[:, 2:3])
        + 2 * np.sin(a * x[:, 0:1] + d * x[:, 1:2]) * np.cos(a * x[:, 2:3] + d * x[:, 0:1]) * np.exp(a * (x[:, 1:2] + x[:, 2:3]))
        + 2 * np.sin(a * x[:, 1:2] + d * x[:, 2:3]) * np.cos(a * x[:, 0:1] + d * x[:, 1:2]) * np.exp(a * (x[:, 2:3] + x[:, 0:1]))
        + 2 * np.sin(a * x[:, 2:3] + d * x[:, 0:1]) * np.cos(a * x[:, 1:2] + d * x[:, 2:3]) * np.exp(a * (x[:, 0:1] + x[:, 1:2]))
    ) * np.exp(-2 * d**2 * x[:, 3:4])

Defining the spatial and temporal domain: next, we define the cubic 3D computational domain (x, y, z) and combine it with a temporal dimension t, using the geometry module as follows:

In [ ]:
# Define the spatial and temporal domains
spatial_domain = dde.geometry.geometry_3d.Cuboid(xmin=[x_min, y_min, z_min], xmax=[x_max, y_max, z_max])
temporal_domain = dde.geometry.TimeDomain(t_min, t_max)
spatio_temporal_domain = dde.geometry.GeometryXTime(spatial_domain, temporal_domain)

Initial and boundary conditions:

In [ ]:
# Boundary and initial conditions
boundary_condition_u = dde.DirichletBC(spatio_temporal_domain, u_func, lambda _, on_boundary: on_boundary, component=0)
boundary_condition_v = dde.DirichletBC(spatio_temporal_domain, v_func, lambda _, on_boundary: on_boundary, component=1)
boundary_condition_w = dde.DirichletBC(spatio_temporal_domain, w_func, lambda _, on_boundary: on_boundary, component=2)

initial_condition_u = dde.IC(spatio_temporal_domain, u_func, lambda _, on_initial: on_initial, component=0)
initial_condition_v = dde.IC(spatio_temporal_domain, v_func, lambda _, on_initial: on_initial, component=1)
initial_condition_w = dde.IC(spatio_temporal_domain, w_func, lambda _, on_initial: on_initial, component=2)

PDE problem and neural network:

In [ ]:
# Define the PDE problem
data = dde.data.TimePDE(
    spatio_temporal_domain,
    pde,
    [boundary_condition_u, boundary_condition_v, boundary_condition_w,
     initial_condition_u, initial_condition_v, initial_condition_w],
    num_domain=domain_points,
    num_boundary=boundary_points,
    num_initial=initial_points,
    num_test=test_points
)

We then define the neural network architecture by calling the class dde.maps.FNN: an input layer of 4 neurons (x, y, z, t), 4 hidden layers of 50 neurons each with tanh activation, and an output layer of 4 neurons (u, v, w, p). The dde.Model object then combines the problem data and the network.

In [ ]:
# Define the neural network model
model_name = 'Neural_Networks/d_{}/Beltrami_Flow_Source_Model_d_{}'.format(d, d)
net = dde.maps.FNN([4] + hidden_layers * [hidden_units] + [4], "tanh", "Glorot normal")
model = dde.Model(data, net)

Now we define the training of the model in two optimization stages: first with the "adam" optimizer, then with "L-BFGS-B". Each stage uses specific loss weights: the first four weights (1) apply to the PDE residuals and the remaining six (100) to the boundary and initial conditions. We also measure the total training time.

In [ ]:
# Train the model
start = time.time()
model.compile("adam", lr=learning_rate, loss_weights=[1, 1, 1, 1, 100, 100, 100, 100, 100, 100])
model.train(epochs=number_of_epochs)
model.compile("L-BFGS-B", loss_weights=[1, 1, 1, 1, 100, 100, 100, 100, 100, 100])
losshistory, train_state = model.train(model_save_path=model_name)
end = time.time()
length = end - start

In this section, we generate a 3D mesh of test points with np.meshgrid and prepare the model inputs. We define the times t_0 and t_1 and concatenate them with the coordinates X to build the test data sets. We then run the model predictions on these sets and extract the predicted values as well as the exact solutions from u_func, v_func, w_func and p_func.

In [ ]:
# Test the model
x, y, z = np.meshgrid(
np.linspace(x_min, x_max, test_points_per_dimension),
np.linspace(y_min, y_max, test_points_per_dimension),
np.linspace(z_min, z_max, test_points_per_dimension),
)
X = np.vstack((np.ravel(x), np.ravel(y), np.ravel(z))).T

t_0 = t_min * np.ones(test_points).reshape(test_points, 1)
t_1 = t_max * np.ones(test_points).reshape(test_points, 1)
X_0 = np.hstack((X, t_0))
X_1 = np.hstack((X, t_1))

output_0 = model.predict(X_0)
output_1 = model.predict(X_1)

# Extract predictions and exact solutions
u_pred_0, v_pred_0, w_pred_0, p_pred_0 = output_0[:, 0], output_0[:, 1], output_0[:, 2], output_0[:, 3]
u_pred_1, v_pred_1, w_pred_1, p_pred_1 = output_1[:, 0], output_1[:, 1], output_1[:, 2], output_1[:, 3]

u_exact_0, v_exact_0, w_exact_0, p_exact_0 = u_func(X_0).reshape(-1), v_func(X_0).reshape(-1), w_func(X_0).reshape(-1), p_func(X_0).reshape(-1)
u_exact_1, v_exact_1, w_exact_1, p_exact_1 = u_func(X_1).reshape(-1), v_func(X_1).reshape(-1), w_func(X_1).reshape(-1), p_func(X_1).reshape(-1)

We then compute the residuals and the relative L2 differences between the model predictions and the exact solutions. We use the model's predict method with operator=pde to obtain the residuals f_0 and f_1 for the two data sets. Next, we compute the relative L2 error for each variable (u, v, w, p) with dde.metrics.l2_relative_error, and the mean absolute residual for each set. Finally, we return the L2 differences, the residuals, the training time and the final number of training epochs.
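The metric dde.metrics.l2_relative_error corresponds to the relative L2 norm of the error over the test points,

\[
\varepsilon_{L^2} = \frac{\lVert u_{\text{pred}} - u_{\text{exact}} \rVert_2}{\lVert u_{\text{exact}} \rVert_2}.
\]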

In [ ]:
# Compute residuals and L2 differences
f_0 = model.predict(X_0, operator=pde)
f_1 = model.predict(X_1, operator=pde)

l2_difference_u_0 = dde.metrics.l2_relative_error(u_exact_0, u_pred_0)
l2_difference_v_0 = dde.metrics.l2_relative_error(v_exact_0, v_pred_0)
l2_difference_w_0 = dde.metrics.l2_relative_error(w_exact_0, w_pred_0)
l2_difference_p_0 = dde.metrics.l2_relative_error(p_exact_0, p_pred_0)
residual_0 = np.mean(np.absolute(f_0))

l2_difference_u_1 = dde.metrics.l2_relative_error(u_exact_1, u_pred_1)
l2_difference_v_1 = dde.metrics.l2_relative_error(v_exact_1, v_pred_1)
l2_difference_w_1 = dde.metrics.l2_relative_error(w_exact_1, w_pred_1)
l2_difference_p_1 = dde.metrics.l2_relative_error(p_exact_1, p_pred_1)
residual_1 = np.mean(np.absolute(f_1))

final_epochs = train_state.epoch

return (l2_difference_u_0, l2_difference_v_0, l2_difference_w_0, l2_difference_p_0, residual_0,
        l2_difference_u_1, l2_difference_v_1, l2_difference_w_1, l2_difference_p_1, residual_1,
        length, final_epochs)

Main loop:

In this part, we initialize arrays to store the results of the L2 differences, the residuals, the training times and the numbers of epochs for each value of d. We then create the directories needed to save the results and the models, after deleting any existing old directories. Next, we train the model for each value of d and record the results in the arrays. Finally, the results are printed and saved as CSV files in the corresponding directories, including the residuals, the L2 differences, the training times and the numbers of epochs.

In [ ]:
### Main file ###
if __name__ == "__main__":
    # Initialize arrays to store results
    l2_differences_u_0 = np.zeros(len(ds))
    l2_differences_v_0 = np.zeros(len(ds))
    l2_differences_w_0 = np.zeros(len(ds))
    l2_differences_p_0 = np.zeros(len(ds))
    residuals_0 = np.zeros(len(ds))
    l2_differences_u_1 = np.zeros(len(ds))
    l2_differences_v_1 = np.zeros(len(ds))
    l2_differences_w_1 = np.zeros(len(ds))
    l2_differences_p_1 = np.zeros(len(ds))
    residuals_1 = np.zeros(len(ds))
    times = np.zeros(len(ds))
    epochs = np.zeros(len(ds))

    # Create directories for results and models
    directory_1 = Path('Neural_Networks')
    directory_2 = Path('Results')

    if directory_1.exists() and directory_1.is_dir():
        shutil.rmtree(directory_1)
    if directory_2.exists() and directory_2.is_dir():
        shutil.rmtree(directory_2)

    os.makedirs(directory_2)

    # Train models for each value of d
    for i, d in enumerate(ds):
        (l2_differences_u_0[i], l2_differences_v_0[i], l2_differences_w_0[i], l2_differences_p_0[i], residuals_0[i],
         l2_differences_u_1[i], l2_differences_v_1[i], l2_differences_w_1[i], l2_differences_p_1[i], residuals_1[i],
         times[i], epochs[i]) = train_source_model(d)

    # Print results
    print("At the beginning: ")
    print("Residuals: ", residuals_0)
    print("Relative L2 Difference u: ", l2_differences_u_0)
    print("Relative L2 Difference v: ", l2_differences_v_0)
    print("Relative L2 Difference w: ", l2_differences_w_0)
    print("Relative L2 Difference p: ", l2_differences_p_0)
    print("\n")
    print("In the end: ")
    print("Residuals: ", residuals_1)
    print("Relative L2 Difference u: ", l2_differences_u_1)
    print("Relative L2 Difference v: ", l2_differences_v_1)
    print("Relative L2 Difference w: ", l2_differences_w_1)
    print("Relative L2 Difference p: ", l2_differences_p_1)
    print("\n")
    print("Training times: ", times)

    # Save results to CSV files
    np.savetxt("Results/residuals_0.csv", residuals_0, delimiter=",")
    np.savetxt("Results/l2_differences_u_0.csv", l2_differences_u_0, delimiter=",")
    np.savetxt("Results/l2_differences_v_0.csv", l2_differences_v_0, delimiter=",")
    np.savetxt("Results/l2_differences_w_0.csv", l2_differences_w_0, delimiter=",")
    np.savetxt("Results/l2_differences_p_0.csv", l2_differences_p_0, delimiter=",")
    np.savetxt("Results/residuals_1.csv", residuals_1, delimiter=",")
    np.savetxt("Results/l2_differences_u_1.csv", l2_differences_u_1, delimiter=",")
    np.savetxt("Results/l2_differences_v_1.csv", l2_differences_v_1, delimiter=",")
    np.savetxt("Results/l2_differences_w_1.csv", l2_differences_w_1, delimiter=",")
    np.savetxt("Results/l2_differences_p_1.csv", l2_differences_p_1, delimiter=",")
    np.savetxt("Results/times.csv", times, delimiter=",")
    np.savetxt("Neural_Networks/epochs.csv", epochs, delimiter=",")

In [ ]:
"""Backend supported: tensorflow.compat.v1, tensorflow, pytorch, paddle"""
import deepxde as dde
import numpy as np

a = 1
d = 1
Re = 1

def pde(x, u):


u_vel, v_vel, w_vel, p = u[:, 0:1], u[:, 1:2], u[:, 2:3], u[:, 3:4]

u_vel_x = dde.grad.jacobian(u, x, i=0, j=0)


u_vel_y = dde.grad.jacobian(u, x, i=0, j=1)
u_vel_z = dde.grad.jacobian(u, x, i=0, j=2)
u_vel_t = dde.grad.jacobian(u, x, i=0, j=3)
u_vel_xx = dde.grad.hessian(u, x, component=0, i=0, j=0)
u_vel_yy = dde.grad.hessian(u, x, component=0, i=1, j=1)
u_vel_zz = dde.grad.hessian(u, x, component=0, i=2, j=2)

v_vel_x = dde.grad.jacobian(u, x, i=1, j=0)


v_vel_y = dde.grad.jacobian(u, x, i=1, j=1)
v_vel_z = dde.grad.jacobian(u, x, i=1, j=2)
v_vel_t = dde.grad.jacobian(u, x, i=1, j=3)
v_vel_xx = dde.grad.hessian(u, x, component=1, i=0, j=0)
v_vel_yy = dde.grad.hessian(u, x, component=1, i=1, j=1)
v_vel_zz = dde.grad.hessian(u, x, component=1, i=2, j=2)

w_vel_x = dde.grad.jacobian(u, x, i=2, j=0)


w_vel_y = dde.grad.jacobian(u, x, i=2, j=1)
w_vel_z = dde.grad.jacobian(u, x, i=2, j=2)
w_vel_t = dde.grad.jacobian(u, x, i=2, j=3)
w_vel_xx = dde.grad.hessian(u, x, component=2, i=0, j=0)
w_vel_yy = dde.grad.hessian(u, x, component=2, i=1, j=1)
w_vel_zz = dde.grad.hessian(u, x, component=2, i=2, j=2)

p_x = dde.grad.jacobian(u, x, i=3, j=0)


p_y = dde.grad.jacobian(u, x, i=3, j=1)
p_z = dde.grad.jacobian(u, x, i=3, j=2)

momentum_x = (
u_vel_t
+ (u_vel * u_vel_x + v_vel * u_vel_y + w_vel * u_vel_z)
+ p_x
- 1 / Re * (u_vel_xx + u_vel_yy + u_vel_zz)
)
momentum_y = (
v_vel_t
+ (u_vel * v_vel_x + v_vel * v_vel_y + w_vel * v_vel_z)
+ p_y
- 1 / Re * (v_vel_xx + v_vel_yy + v_vel_zz)
)
momentum_z = (
w_vel_t
+ (u_vel * w_vel_x + v_vel * w_vel_y + w_vel * w_vel_z)
+ p_z
- 1 / Re * (w_vel_xx + w_vel_yy + w_vel_zz)
)
continuity = u_vel_x + v_vel_y + w_vel_z

return [momentum_x, momentum_y, momentum_z, continuity]

def u_func(x):
return (
-a
* (
np.exp(a * x[:, 0:1]) * np.sin(a * x[:, 1:2] + d * x[:, 2:3])
+ np.exp(a * x[:, 2:3]) * np.cos(a * x[:, 0:1] + d * x[:, 1:2])
)
* np.exp(-(d ** 2) * x[:, 3:4])
)

def v_func(x):
return (
-a
* (
np.exp(a * x[:, 1:2]) * np.sin(a * x[:, 2:3] + d * x[:, 0:1])
+ np.exp(a * x[:, 0:1]) * np.cos(a * x[:, 1:2] + d * x[:, 2:3])
)
* np.exp(-(d ** 2) * x[:, 3:4])
)

def w_func(x):
return (
-a
* (
np.exp(a * x[:, 2:3]) * np.sin(a * x[:, 0:1] + d * x[:, 1:2])
+ np.exp(a * x[:, 1:2]) * np.cos(a * x[:, 2:3] + d * x[:, 0:1])
)
* np.exp(-(d ** 2) * x[:, 3:4])
)

def p_func(x):
return (
-0.5
* a ** 2
* (
np.exp(2 * a * x[:, 0:1])
+ np.exp(2 * a * x[:, 1:2])
+ np.exp(2 * a * x[:, 2:3])
+ 2
* np.sin(a * x[:, 0:1] + d * x[:, 1:2])
* np.cos(a * x[:, 2:3] + d * x[:, 0:1])
* np.exp(a * (x[:, 1:2] + x[:, 2:3]))
+ 2
* np.sin(a * x[:, 1:2] + d * x[:, 2:3])
* np.cos(a * x[:, 0:1] + d * x[:, 1:2])
* np.exp(a * (x[:, 2:3] + x[:, 0:1]))
+ 2
* np.sin(a * x[:, 2:3] + d * x[:, 0:1])
* np.cos(a * x[:, 1:2] + d * x[:, 2:3])
* np.exp(a * (x[:, 0:1] + x[:, 1:2]))
)
* np.exp(-2 * d ** 2 * x[:, 3:4])
)

spatial_domain = dde.geometry.Cuboid(xmin=[-1, -1, -1], xmax=[1, 1, 1])


temporal_domain = dde.geometry.TimeDomain(0, 1)
spatio_temporal_domain = dde.geometry.GeometryXTime(spatial_domain, temporal_domain)

boundary_condition_u = dde.icbc.DirichletBC(
spatio_temporal_domain, u_func, lambda _, on_boundary: on_boundary, component=0
)
boundary_condition_v = dde.icbc.DirichletBC(
spatio_temporal_domain, v_func, lambda _, on_boundary: on_boundary, component=1
)
boundary_condition_w = dde.icbc.DirichletBC(
spatio_temporal_domain, w_func, lambda _, on_boundary: on_boundary, component=2
)

initial_condition_u = dde.icbc.IC(
spatio_temporal_domain, u_func, lambda _, on_initial: on_initial, component=0
)
initial_condition_v = dde.icbc.IC(
spatio_temporal_domain, v_func, lambda _, on_initial: on_initial, component=1
)
initial_condition_w = dde.icbc.IC(
spatio_temporal_domain, w_func, lambda _, on_initial: on_initial, component=2
)

data = dde.data.TimePDE(
spatio_temporal_domain,
pde,
[
boundary_condition_u,
boundary_condition_v,
boundary_condition_w,
initial_condition_u,
initial_condition_v,
initial_condition_w,
],
num_domain=50000,
num_boundary=5000,
num_initial=5000,
num_test=10000,
)

net = dde.nn.FNN([4] + 4 * [50] + [4], "tanh", "Glorot normal")

model = dde.Model(data, net)

model.compile("adam", lr=1e-3, loss_weights=[1, 1, 1, 1, 100, 100, 100, 100, 100, 100])


model.train(iterations=1000)
model.compile("L-BFGS", loss_weights=[1, 1, 1, 1, 100, 100, 100, 100, 100, 100])
losshistory, train_state = model.train()

x, y, z = np.meshgrid(
np.linspace(-1, 1, 10), np.linspace(-1, 1, 10), np.linspace(-1, 1, 10)
)
X = np.vstack((np.ravel(x), np.ravel(y), np.ravel(z))).T

t_0 = np.zeros(1000).reshape(1000, 1)
t_1 = np.ones(1000).reshape(1000, 1)

X_0 = np.hstack((X, t_0))


X_1 = np.hstack((X, t_1))

output_0 = model.predict(X_0)
output_1 = model.predict(X_1)

u_pred_0 = output_0[:, 0].reshape(-1)


v_pred_0 = output_0[:, 1].reshape(-1)
w_pred_0 = output_0[:, 2].reshape(-1)
p_pred_0 = output_0[:, 3].reshape(-1)

u_exact_0 = u_func(X_0).reshape(-1)
v_exact_0 = v_func(X_0).reshape(-1)
w_exact_0 = w_func(X_0).reshape(-1)
p_exact_0 = p_func(X_0).reshape(-1)

u_pred_1 = output_1[:, 0].reshape(-1)


v_pred_1 = output_1[:, 1].reshape(-1)
w_pred_1 = output_1[:, 2].reshape(-1)
p_pred_1 = output_1[:, 3].reshape(-1)

u_exact_1 = u_func(X_1).reshape(-1)
v_exact_1 = v_func(X_1).reshape(-1)
w_exact_1 = w_func(X_1).reshape(-1)
p_exact_1 = p_func(X_1).reshape(-1)

f_0 = model.predict(X_0, operator=pde)


f_1 = model.predict(X_1, operator=pde)

l2_difference_u_0 = dde.metrics.l2_relative_error(u_exact_0, u_pred_0)


l2_difference_v_0 = dde.metrics.l2_relative_error(v_exact_0, v_pred_0)
l2_difference_w_0 = dde.metrics.l2_relative_error(w_exact_0, w_pred_0)
l2_difference_p_0 = dde.metrics.l2_relative_error(p_exact_0, p_pred_0)
residual_0 = np.mean(np.absolute(f_0))

l2_difference_u_1 = dde.metrics.l2_relative_error(u_exact_1, u_pred_1)


l2_difference_v_1 = dde.metrics.l2_relative_error(v_exact_1, v_pred_1)
l2_difference_w_1 = dde.metrics.l2_relative_error(w_exact_1, w_pred_1)
l2_difference_p_1 = dde.metrics.l2_relative_error(p_exact_1, p_pred_1)
residual_1 = np.mean(np.absolute(f_1))

print("Accuracy at t = 0:")
print("Mean residual:", residual_0)
print("L2 relative error in u:", l2_difference_u_0)
print("L2 relative error in v:", l2_difference_v_0)
print("L2 relative error in w:", l2_difference_w_0)
print("\n")
print("Accuracy at t = 1:")
print("Mean residual:", residual_1)
print("L2 relative error in u:", l2_difference_u_1)
print("L2 relative error in v:", l2_difference_v_1)
print("L2 relative error in w:", l2_difference_w_1)

No backend selected.
Finding available backend...

Using backend: tensorflow.compat.v1


Other supported backends: tensorflow, pytorch, jax, paddle.
paddle supports more examples now and is recommended.

Found tensorflow.compat.v1
Setting the default backend to "tensorflow.compat.v1". You can change it in the ~/.deepxde/config.json file or export the DDE_BACKEND

Enable just-in-time compilation with XLA.

Warning: 283 points required, but 343 points sampled.


Warning: 10000 points required, but 12348 points sampled.
Compiling model...
Building feed-forward neural network...
'build' took 0.083144 s
'compile' took 3.056917 s

Training model...

Step Train loss Test loss


0 [4.55e-02, 3.83e-01, 1.10e-01, 2.08e-02, 9.53e+01, 1.35e+02, 1.61e+02, 1.72e+02, 2.32e+02, 2.52e+02] [4.47e-02, 3.92e-01,
1000 [2.88e-01, 2.98e-01, 3.60e-01, 1.43e-02, 2.26e-01, 2.40e-01, 2.27e-01, 1.02e-01, 1.24e-01, 1.01e-01] [1.15e-01, 9.81e-02,

Best model at step 1000:


train loss: 1.98e+00
test loss: 1.36e+00
test metric: []

'train' took 258.589343 s

Compiling model...
'compile' took 1.354819 s

Training model...

Step Train loss Test loss


1000 [2.88e-01, 2.98e-01, 3.60e-01, 1.43e-02, 2.26e-01, 2.40e-01, 2.27e-01, 1.02e-01, 1.24e-01, 1.01e-01] [1.15e-01, 9.81e-02,
1016 [2.88e-01, 2.98e-01, 3.60e-01, 1.43e-02, 2.26e-01, 2.40e-01, 2.27e-01, 1.02e-01, 1.24e-01, 1.01e-01] [1.15e-01, 9.81e-02,

Best model at step 1000:


train loss: 1.98e+00
test loss: 1.36e+00
test metric: []

'train' took 11.829734 s

Accuracy at t = 0:
Mean residual: 0.79304934
L2 relative error in u: 0.0360756548422574
L2 relative error in v: 0.04037136449286261
L2 relative error in w: 0.03854013579661913

Accuracy at t = 1:
Mean residual: 0.21886893
L2 relative error in u: 0.09195743919918441
L2 relative error in v: 0.08259364457246625
L2 relative error in w: 0.08577550842688736

In [ ]:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np

matplotlib.pyplot: for plotting the figures.

mpl_toolkits.mplot3d.Axes3D: for generating 3D plots.

numpy: for manipulating numerical arrays.

In [ ]:
x_min, x_max = -1, 1
y_min, y_max = -1, 1
z_min, z_max = -1, 1
t_min, t_max = 0, 1

These variables define the bounds of the spatial and temporal domain.

The flow is studied on a cube and over a time interval.

In [ ]:
def plot_wireframes(x, y, predicted, exact, titles, xlabel='location x',
                    ylabel='location y', zlabel='Value'):

    num_components = len(predicted)
    fig = plt.figure(figsize=(16, 10))  # Adjust the figure size

    for i, (pred, exact_sol, title) in enumerate(zip(predicted, exact, titles)):

        # Subplot for the predicted solution
        ax1 = fig.add_subplot(2, num_components, i + 1, projection='3d')
        plot_surface(ax1, x, y, pred, title + " (Predicted)", 'blue', xlabel, ylabel, zlabel)

        # Subplot for the exact solution
        ax2 = fig.add_subplot(2, num_components, i + 1 + num_components, projection='3d')
        plot_surface(ax2, x, y, exact_sol, title + " (Exact)", 'red', xlabel, ylabel, zlabel)

    plt.tight_layout()
    plt.show()

This function plots the predicted and exact solutions for each velocity component (u, v, w).
It creates a main figure with 3D subplots:

Top row: predicted solutions.

Bottom row: exact solutions.

It uses the plot_surface helper to draw each plot.

In [ ]:
def plot_surface(ax, x, y, component, title, color, xlabel, ylabel, zlabel):

    # Build the grid
    x_unique = np.unique(x)
    y_unique = np.unique(y)
    X_grid, Y_grid = np.meshgrid(x_unique, y_unique)

    # Reshape the scattered data onto the grid
    Z_grid = np.zeros_like(X_grid)
    for j in range(len(x)):
        xi, yi = np.where((X_grid == x[j]) & (Y_grid == y[j]))
        if xi.size > 0 and yi.size > 0:
            Z_grid[xi, yi] = component[j]

    ax.plot_wireframe(X_grid, Y_grid, Z_grid, color=color)
    ax.set_title(title)
    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    ax.set_zlabel(zlabel)

Converts the data to a 2D grid for the wireframe plots.

Builds a mesh with np.meshgrid.

Fills the grid with the values of the current component (u, v, w).

Draws a wireframe plot (plot_wireframe) in the given colour (color); a vectorized alternative is sketched below.
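The point-by-point fill loop above works but is slow, and since X_0 contains several z-values for each (x, y) location, coincident points simply overwrite one another. A minimal sketch of a vectorized alternative, assuming scipy is available (it keeps the same one-value-per-grid-node behaviour), is:

from scipy.interpolate import griddata

# Hypothetical replacement for the fill loop inside plot_surface:
# interpolate the scattered (x, y, component) samples onto the regular grid.
Z_grid = griddata((x, y), component, (X_grid, Y_grid), method="nearest")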

In [ ]:
x, y, z = np.meshgrid(
np.linspace(-1, 1, 10), np.linspace(-1, 1, 10), np.linspace(-1, 1, 10)
)
X = np.vstack((np.ravel(x), np.ravel(y), np.ravel(z))).T

np.meshgrid generates a 3D grid of 10×10×10 points in space.

np.ravel(x), np.ravel(y), np.ravel(z) flatten the grid into 1D vectors.

np.vstack(...).T builds a (1000, 3) matrix of 3D points.

In [ ]:
t_0 = np.zeros(1000).reshape(1000, 1)
X_0 = np.hstack((X, t_0))

t_0 is a vector of 1000 zeros, representing t = 0.

X_0 is the matrix of points (x, y, z, t = 0) used for the evaluation.

In [ ]:
u_pred = u_pred_0  # or u_pred_1 for t = 1
v_pred = v_pred_0  # or v_pred_1 for t = 1
w_pred = w_pred_0  # or w_pred_1 for t = 1

u_exact = u_exact_0  # or u_exact_1 for t = 1
v_exact = v_exact_0  # or v_exact_1 for t = 1
w_exact = w_exact_0  # or w_exact_1 for t = 1

u_pred, v_pred, w_pred: solutions predicted by the PINN model.

u_exact, v_exact, w_exact: exact solutions of the Beltrami flow.

In [ ]:
plot_wireframes(
x, y,
predicted=[u_pred, v_pred, w_pred],
exact=[u_exact, v_exact, w_exact],
titles=["u Component", "v Component", "w Component"],
xlabel="Location x",
ylabel="Location y",
zlabel="Value"
)

Plots of the results

In [ ]:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np

def plot_wireframes(x, y, components, titles, xlabel='location x', ylabel='location y', zlabel='Value'):

    num_components = len(components)
    fig = plt.figure(figsize=(16, 5))  # Adjust the figure size for multiple subplots

    for i, (component, title) in enumerate(zip(components, titles), start=1):

        ax = fig.add_subplot(1, num_components, i, projection='3d')

        # Create the grid for the wireframe
        x_unique = np.unique(x)
        y_unique = np.unique(y)
        X_grid, Y_grid = np.meshgrid(x_unique, y_unique)

        # Reshape the component data to match the grid
        Z_grid = np.zeros_like(X_grid)
        for j in range(len(x)):
            xi, yi = np.where((X_grid == x[j]) & (Y_grid == y[j]))
            if xi.size > 0 and yi.size > 0:
                Z_grid[xi, yi] = component[j]

        # Plot the wireframe for the current component
        ax.plot_wireframe(X_grid, Y_grid, Z_grid, color='blue')
        ax.set_title(title)
        ax.set_xlabel(xlabel)
        ax.set_ylabel(ylabel)
        ax.set_zlabel(zlabel)

    plt.tight_layout()
    plt.show()

# Example data: replace with your real data
x = X_0[:, 0]  # x coordinates
y = X_0[:, 1]  # y coordinates

# The three components u, v and w (replace with your computed solutions)
u_solution = u_pred_1  # Predictions for u
v_solution = v_pred_1  # Predictions for v
w_solution = w_pred_1  # Predictions for w

# Call the function to display the three components
plot_wireframes(
    x, y,
    components=[u_solution, v_solution, w_solution],
    titles=["u Component", "v Component", "w Component"],
    xlabel="location x",
    ylabel="location y",
    zlabel="Value"
)
In [ ]:

import matplotlib.pyplot as plt


from mpl_toolkits.mplot3d import Axes3D
import numpy as np
x_min, x_max = -1, 1
y_min, y_max = -1, 1
z_min, z_max = -1, 1
t_min, t_max = 0, 1
def plot_wireframes(x, y, predicted, exact, titles, xlabel='location x',
                    ylabel='location y', zlabel='Value'):

    num_components = len(predicted)
    fig = plt.figure(figsize=(16, 10))  # Adjust the figure size

    for i, (pred, exact_sol, title) in enumerate(zip(predicted, exact, titles)):

        # Subplot for the predicted solution
        ax1 = fig.add_subplot(2, num_components, i + 1, projection='3d')
        plot_surface(ax1, x, y, pred, title + " (Predicted)", 'blue', xlabel, ylabel, zlabel)

        # Subplot for the exact solution
        ax2 = fig.add_subplot(2, num_components, i + 1 + num_components, projection='3d')
        plot_surface(ax2, x, y, exact_sol, title + " (Exact)", 'red', xlabel, ylabel, zlabel)

    plt.tight_layout()
    plt.show()

def plot_surface(ax, x, y, component, title, color, xlabel, ylabel, zlabel):

    # Build the grid
    x_unique = np.unique(x)
    y_unique = np.unique(y)
    X_grid, Y_grid = np.meshgrid(x_unique, y_unique)

    # Reshape the scattered data onto the grid
    Z_grid = np.zeros_like(X_grid)
    for j in range(len(x)):
        xi, yi = np.where((X_grid == x[j]) & (Y_grid == y[j]))
        if xi.size > 0 and yi.size > 0:
            Z_grid[xi, yi] = component[j]

    ax.plot_wireframe(X_grid, Y_grid, Z_grid, color=color)
    ax.set_title(title)
    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    ax.set_zlabel(zlabel)

# Generate the evaluation points
x, y, z = np.meshgrid(
    np.linspace(-1, 1, 10), np.linspace(-1, 1, 10), np.linspace(-1, 1, 10)
)
X = np.vstack((np.ravel(x), np.ravel(y), np.ravel(z))).T
t_0 = np.zeros(1000).reshape(1000, 1)
X_0 = np.hstack((X, t_0))

# Extract the coordinates
x = X_0[:, 0]
y = X_0[:, 1]

# Predicted solutions (replace with your actual predictions)
u_pred = u_pred_0  # or u_pred_1 for t = 1
v_pred = v_pred_0  # or v_pred_1 for t = 1
w_pred = w_pred_0  # or w_pred_1 for t = 1

# Exact solutions (replace with the exact Beltrami flow values)
u_exact = u_exact_0  # or u_exact_1 for t = 1
v_exact = v_exact_0  # or v_exact_1 for t = 1
w_exact = w_exact_0  # or w_exact_1 for t = 1

# Display the solutions
plot_wireframes(
    x, y,
    predicted=[u_pred, v_pred, w_pred],
    exact=[u_exact, v_exact, w_exact],
    titles=["u Component", "v Component", "w Component"],
    xlabel="Location x",
    ylabel="Location y",
    zlabel="Value"
)

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # For 3D plots

# Parameters
d_values = [0.75, 1]  # Values of d
t = 0                 # Time instant t
z_value = 0           # z position fixed at 0

# Create a grid for x and y
x = np.linspace(0, 1, 20)  # Fewer points for better readability of the wireframes
y = np.linspace(0, 1, 20)
x, y = np.meshgrid(x, y)

# Parameter a
a = 1

# Velocity components of the exact Beltrami solution restricted to z = 0 and t = 0
# (obtained from u_func, v_func, w_func with z = 0 and t = 0, so exp(-d^2 t) = 1)

# d = 0.75
u_075 = -a * (np.exp(a * x) * np.sin(a * y) + np.cos(a * x + d_values[0] * y))
v_075 = -a * (np.exp(a * y) * np.sin(d_values[0] * x) + np.exp(a * x) * np.cos(a * y))
w_075 = -a * (np.sin(a * x + d_values[0] * y) + np.exp(a * y) * np.cos(d_values[0] * x))

# d = 1
u_1 = -a * (np.exp(a * x) * np.sin(a * y) + np.cos(a * x + d_values[1] * y))
v_1 = -a * (np.exp(a * y) * np.sin(d_values[1] * x) + np.exp(a * x) * np.cos(a * y))
w_1 = -a * (np.sin(a * x + d_values[1] * y) + np.exp(a * y) * np.cos(d_values[1] * x))

# Create the figure: one row per velocity component, one column per value of d
fig, axs = plt.subplots(3, 2, figsize=(15, 18), subplot_kw={'projection': '3d'})

components = [
    ("u", [u_075, u_1], ["blue", "red"]),
    ("v", [v_075, v_1], ["green", "orange"]),
    ("w", [w_075, w_1], ["purple", "pink"]),
]

for row, (name, fields, colors) in enumerate(components):
    for col, (field, color, d_val) in enumerate(zip(fields, colors, d_values)):
        ax = axs[row, col]
        ax.plot_wireframe(x, y, field, color=color, linewidth=0.5)
        ax.set_title(f'Velocity component ${name}(x, y, z = 0, t = 0)$ for $d = {d_val}$')
        ax.set_xlabel('location x')
        ax.set_ylabel('location y')
        ax.set_zlabel(f'{name}(x, y)')

# Adjust the spacing between subplots and show the figure
plt.tight_layout()
plt.show()
