The pVAE framework is trained on two datasets: a synthetic dataset of artificial porous microstructures and CT-scan images of volume elements from real open-cell foams. The encoder-decoder architecture of the VAE captures key microstructural features, mapping them into a compact and interpretable latent space for efficient structure-property exploration. The study provides a detailed analysis and interpretation of the latent space, demonstrating its role in structure-property mapping, interpolation, and inverse design. This approach facilitates the generation of new metamaterials with desired properties.
- Microstructures are represented in voxelized form (binary images).
- The effective properties are:
  - Porosity $n^F$
  - Intrinsic permeability tensor $\mathbf{K}^{S}$
### Porosity

The porosity can be calculated directly from the volume fraction of the pore phase in the binary image:

$$n^F = \frac{V^F}{V} = \frac{\text{number of pore voxels}}{\text{total number of voxels}}$$
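As a minimal NumPy sketch (the array shape and the convention that `1` marks a pore voxel are assumptions for illustration), the porosity is simply the pore-voxel fraction:

```python
import numpy as np

def porosity(voxels: np.ndarray) -> float:
    """Volume fraction of the pore phase, assuming voxels == 1 mark pores."""
    return float(np.count_nonzero(voxels == 1) / voxels.size)

# Toy 4x4x4 volume in which exactly half of the voxels are pores.
vol = np.zeros((4, 4, 4), dtype=np.uint8)
vol[:2] = 1
print(porosity(vol))  # 0.5
```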
### Estimating the intrinsic permeability tensor
- A mesh-based approach can be used from the 3D binary image.
- The Lattice Boltzmann Method (LBM) discretizes the pore space on a lattice and evolves particle distribution functions in velocity space via a collision operator, recovering the Navier–Stokes equations in the macroscopic limit.
- From the LBM, we obtain the microscopic velocity field $\mathbf{u}(\mathbf{x})$.
The homogenized velocity is computed by volume averaging:

$$\langle \mathbf{u} \rangle = \frac{1}{V} \int_{V} \mathbf{u}(\mathbf{x})\,\mathrm{d}v$$
Applying Darcy's law,

$$\langle \mathbf{u} \rangle = -\frac{1}{\mu}\,\mathbf{K}^{S}\,\nabla p\,,$$

where:

- $\mu$ is the dynamic viscosity,
- $\nabla p$ is the macroscopic pressure gradient,
- $\mathbf{K}^S$ is the intrinsic permeability tensor.

Thus, applying the macroscopic pressure gradient along each coordinate direction and solving Darcy's law for $\mathbf{K}^S$ yields the full intrinsic permeability tensor.
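The averaging-plus-Darcy step can be sketched as follows. This is a toy illustration, not the TPM-LBM pipeline of [1]: the viscosity value, the three-simulation setup, and the isotropic test medium are all assumptions.

```python
import numpy as np

MU = 1.0e-3  # dynamic viscosity (Pa*s), assumed value for illustration

def permeability(avg_velocities: np.ndarray, pressure_grads: np.ndarray) -> np.ndarray:
    """Solve Darcy's law <u> = -(1/mu) K grad(p) for the tensor K.

    avg_velocities: 3x3 matrix, column i = volume-averaged velocity of run i
    pressure_grads: 3x3 matrix, column i = applied pressure gradient of run i
    """
    return -MU * avg_velocities @ np.linalg.inv(pressure_grads)

# Toy consistency check: isotropic medium with K = k*I.
k = 1e-10
grads = -np.eye(3)           # unit pressure drop along each axis
vels = (k / MU) * np.eye(3)  # <u> = -(1/mu) K grad(p)
print(np.allclose(permeability(vels, grads), k * np.eye(3)))  # True
```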
Reference:
[1] Nguyen Thien Phu, Uwe Navrath, Yousef Heider, Julaluk Carmai, Bernd Markert.
Investigating the impact of deformation on foam permeability through CT scans and the Lattice–Boltzmann method.
PAMM, 2023. [https://doi.org/10.1002/pamm.202300154]
To overcome the challenges and computational expense of the numerical model, we apply a surrogate model that obtains the intrinsic permeability much faster. In particular, a CNN model is used to predict these values from the binary image instead of the TPM-LBM simulation. For more details, please check [2].
Reference:
[2] Yousef Heider, Fadi Aldakheel, Wolfgang Ehlers.
A multiscale CNN-based intrinsic permeability prediction in deformable porous media.
Applied Sciences, 15(5):2589, 2025.
The Variational Autoencoder (VAE), introduced in 2013, is one of the most influential generative models.
It combines deep learning with probabilistic inference, enabling the mapping between high-dimensional data and a structured latent space. For more details, see [3].
Reference:
[3] Kingma & Welling (2013), Auto-Encoding Variational Bayes.
The Kullback–Leibler (KL) divergence measures how one probability distribution differs from another:

$$D_{KL}(q\,\|\,p) = \mathbb{E}_{q}\left[\log \frac{q(x)}{p(x)}\right] = \underbrace{\mathbb{E}_{q}\left[\log q(x)\right]}_{-H(q)} - \underbrace{\mathbb{E}_{q}\left[\log p(x)\right]}_{-CE(q,p)}$$

The first term is the negative entropy ($-H$) and the subtracted term is the negative cross-entropy ($-CE$), so $D_{KL}(q\,\|\,p) = CE(q,p) - H(q)$.
Jensen's inequality: for a convex function $f$, $\mathbb{E}[f(x)] \geq f(\mathbb{E}[x])$; for a concave function $f$, $\mathbb{E}[f(x)] \leq f(\mathbb{E}[x])$.
Applying Jensen's inequality to the concave function $\log$:

$$-D_{KL}(q\,\|\,p) = \mathbb{E}_{q}\left[\log \frac{p(x)}{q(x)}\right] \leq \log \mathbb{E}_{q}\left[\frac{p(x)}{q(x)}\right] = \log \sum_{x} p(x) = \log 1 = 0$$

From that, we have $D_{KL}(q\,\|\,p) \geq 0$: the KL divergence must be equal to or larger than zero.
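A small numerical check of this non-negativity for discrete distributions (the example distributions are arbitrary):

```python
import numpy as np

def kl_divergence(q: np.ndarray, p: np.ndarray) -> float:
    """Discrete KL divergence D_KL(q || p) = sum_x q(x) log(q(x)/p(x))."""
    q = q / q.sum()
    p = p / p.sum()
    return float(np.sum(q * np.log(q / p)))

q = np.array([0.1, 0.4, 0.5])
p = np.array([1 / 3, 1 / 3, 1 / 3])
print(kl_divergence(q, p) >= 0.0)  # True (Gibbs' inequality)
print(kl_divergence(q, q))         # 0.0 for identical distributions
```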
Bayes' theorem:

$$p_\theta(\mathbf{z}|\mathbf{x}) = \frac{p_\theta(\mathbf{x}|\mathbf{z})\,p(\mathbf{z})}{p_\theta(\mathbf{x})}$$

Direct computation of the posterior $p_\theta(\mathbf{z}|\mathbf{x})$ is intractable, because the evidence $p_\theta(\mathbf{x})$ requires integrating over the entire latent space. Instead, the VAE introduces a variational distribution $q_\phi(\mathbf{z}|\mathbf{x})$ that approximates the true posterior.
The true log-likelihood is:

$$\log p_{\theta}(\mathbf{x}) = \log \int p_\theta(\mathbf{x}|\mathbf{z})\,p(\mathbf{z})\,\mathrm{d}\mathbf{z}$$

Since this integral is intractable, we approximate it using variational inference, where the encoder learns $q_\phi(\mathbf{z}|\mathbf{x})$.
A VAE learns a stochastic mapping between an observed variable $\mathbf{x}$ and a latent variable $\mathbf{z}$. For any choice of inference model, including the choice of variational parameters $\phi$, the following decomposition holds:
- The log marginal likelihood can be expressed as an expectation over $q_\phi(\mathbf{z}|\mathbf{x})$:

$$\log p_{\theta}(\mathbf{x}) = \mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}(\mathbf{x})\right]$$

- Since $\log p_{\theta}(\mathbf{x})$ is constant w.r.t. $\mathbf{z}$, we can rewrite it using $p_\theta(\mathbf{x}) = p_\theta(\mathbf{x},\mathbf{z}) / p_\theta(\mathbf{z}|\mathbf{x})$:

$$\log p_{\theta}(\mathbf{x}) = \mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x})}\left[\log \frac{p_\theta(\mathbf{x},\mathbf{z})}{p_\theta(\mathbf{z}|\mathbf{x})}\right]$$

- Add and subtract $\log q_\phi(\mathbf{z}|\mathbf{x})$:

$$\log p_{\theta}(\mathbf{x}) = \mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x})}\left[\log \frac{p_\theta(\mathbf{x},\mathbf{z})}{q_\phi(\mathbf{z}|\mathbf{x})}\right] + \mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x})}\left[\log \frac{q_\phi(\mathbf{z}|\mathbf{x})}{p_\theta(\mathbf{z}|\mathbf{x})}\right]$$

- The second term is the KL divergence between the approximate and the true posterior. Thus:

$$\log p_{\theta}(\mathbf{x}) = \underbrace{\mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x})}\left[\log \frac{p_\theta(\mathbf{x},\mathbf{z})}{q_\phi(\mathbf{z}|\mathbf{x})}\right]}_{\mathcal{L}_{\theta,\phi}(\mathbf{x})} + D_{KL}\big(q_\phi(\mathbf{z}|\mathbf{x})\,\|\,p_\theta(\mathbf{z}|\mathbf{x})\big)$$
Since the KL divergence must be larger than or equal to zero, as demonstrated before (Section 1 - KL divergence), we obtain the lower bound:

$$\log p_{\theta}(\mathbf{x}) \geq \mathcal{L}_{\theta,\phi}(\mathbf{x}) = \mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x})}\left[\log \frac{p_\theta(\mathbf{x},\mathbf{z})}{q_\phi(\mathbf{z}|\mathbf{x})}\right]$$
Expanding the joint $p_\theta(\mathbf{x},\mathbf{z}) = p_\theta(\mathbf{x}|\mathbf{z})\,p(\mathbf{z})$, the Evidence Lower Bound (ELBO) that is maximized to train the VAE reads:

$$\mathcal{L}_{\theta,\phi}(\mathbf{x}) = \mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x})}\left[\log p_\theta(\mathbf{x}|\mathbf{z})\right] - D_{KL}\big(q_\phi(\mathbf{z}|\mathbf{x})\,\|\,p(\mathbf{z})\big)$$
The ELBO has two competing terms:

- Reconstruction term: encourages accurate data reconstruction via the decoder $p_\theta(\mathbf{x}|\mathbf{z})$.
- Regularization term: forces $q_\phi(\mathbf{z}|\mathbf{x})$ to be close to the prior $p(\mathbf{z})$.

Thus, the VAE balances reconstruction quality with latent space regularization.
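A minimal NumPy sketch of the resulting training loss (the negative ELBO) for a Bernoulli decoder and a diagonal-Gaussian encoder; the array shapes and names are hypothetical, not taken from the repository code:

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """Negative ELBO: binary cross-entropy reconstruction + KL to N(0, I).

    x, x_recon : flattened binary image and its reconstruction in (0, 1)
    mu, log_var: encoder outputs for q_phi(z|x) = N(mu, diag(exp(log_var)))
    """
    eps = 1e-7  # numerical smoothing for the logs
    recon = -np.sum(x * np.log(x_recon + eps) + (1 - x) * np.log(1 - x_recon + eps))
    kl = 0.5 * np.sum(mu**2 + np.exp(log_var) - log_var - 1.0)
    return recon + kl

# Perfect reconstruction and posterior == prior give (numerically) zero loss.
x = np.array([0.0, 1.0, 1.0, 0.0])
loss = vae_loss(x, x, np.zeros(2), np.zeros(2))
print(abs(loss) < 1e-5)  # True
```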
When the prior is chosen as a standard normal, $p(\mathbf{z}) = \mathcal{N}(\mathbf{0}, \mathbf{I})$, and the encoder outputs a diagonal Gaussian, $q_\phi(\mathbf{z}|\mathbf{x}) = \mathcal{N}\big(\boldsymbol{\mu}, \operatorname{diag}(\boldsymbol{\sigma}^2)\big)$, the KL term can be evaluated in closed form.
Given a dataset $\mathcal{D}$ of i.i.d. data (discrete variables), the log-likelihood factorizes as:

$$\log p_\theta(\mathcal{D}) = \sum_{\mathbf{x} \in \mathcal{D}} \log p_\theta(\mathbf{x})$$
We have the Gaussian kernel here, so the KL divergence can be expressed in closed form. With a standard normal prior,

$$D_{KL}\big(q_\phi(\mathbf{z}|\mathbf{x})\,\|\,p(\mathbf{z})\big) = \frac{1}{2}\sum_{j=1}^{d}\left(\mu_j^2 + \sigma_j^2 - \log \sigma_j^2 - 1\right)$$

For other kernels, the KL divergence must be derived accordingly.
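The closed-form expression can be checked against a Monte Carlo estimate of $\mathbb{E}_q[\log q(\mathbf{z}) - \log p(\mathbf{z})]$; a sketch in which the chosen `mu` and `log_var` values are arbitrary:

```python
import numpy as np

def kl_gaussian_std_normal(mu: np.ndarray, log_var: np.ndarray) -> float:
    """Closed-form D_KL( N(mu, diag(exp(log_var))) || N(0, I) )."""
    return float(0.5 * np.sum(mu**2 + np.exp(log_var) - log_var - 1.0))

rng = np.random.default_rng(0)
mu, log_var = np.array([0.5, -0.2]), np.array([0.1, -0.3])
std = np.exp(0.5 * log_var)

# Monte Carlo estimate of E_q[log q(z) - log p(z)].
z = mu + std * rng.standard_normal((200_000, 2))
log_q = -0.5 * np.sum((z - mu) ** 2 / std**2 + log_var + np.log(2 * np.pi), axis=1)
log_p = -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=1)

print(kl_gaussian_std_normal(mu, log_var))  # closed form
print(np.mean(log_q - log_p))               # MC estimate, close to the above
```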
© Prof. Dr.-Ing. habil. Fadi Aldakheel
Leibniz Universität Hannover
Faculty of Civil Engineering and Geodetic Science
Institut für Baumechanik und Numerische Mechanik (IBNM)
https://www.ibnm.uni-hannover.de
Coded by Phu Thien Nguyen with the help of Copilot 😃
Paper: Deep learning-aided inverse design of porous metamaterials
The authors are: Phu Thien Nguyen, Yousef Heider, Dennis Kochmann, Fadi Aldakheel
pVAE
├── src
│ ├── model.py # Defines the VAE architecture
│ ├── train.py # Contains the training loop for the VAE
│ ├── train_vae_tune.py # Contains the ray tune framework for hyperparameter tuning
│ ├── evaluate.py # Evaluates the performance of the trained VAE
│ ├── latent_extract.py # Extracts the latent space
│ ├── interpolation.py # The spherical linear interpolation (Slerp)
│ ├── inverse.py # Inverse process with target properties
│ └── utils
│ └── data_loader.py # Utility functions for loading and preprocessing data
├── data
│ ├── data3D200 # Directory containing 3D BMP images ($200^3$) (email me: [email protected])
│ ├── data3D200.csv # Effective Properties for 3D BMP images ($200^3$)
│ ├── syn-data.ipynb # Notebook for creating synthetic data ($100^3$); pore positions drawn from a uniform distribution
├── requirements.txt # Lists the required Python dependencies and data information
├── .gitignore # Specifies files to be ignored by Git
└── README.md # Documentation for the project
- Clone the repository:

  git clone <repository-url>
  cd CNN-pVAE-

- Install the required dependencies:

  pip install -r requirements.txt

- Place your 3D BMP images in the data3d directory.
To train the VAE, run the following command:
python src/train.py
After training, you can evaluate the model using:
python src/evaluate.py
The pVAE consists of a variational autoencoder (VAE) and a regressor. The VAE includes an encoder that compresses input images into a latent space and a decoder that reconstructs images from this latent representation. In addition to minimizing the reconstruction loss and the Kullback-Leibler divergence, the latent space is used by a regressor to predict effective material properties directly from the encoded representations. This joint framework enables both image reconstruction and property prediction, facilitating structure-property mapping and inverse design. For more information and results, please check [4, 5].
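As a hedged sketch of how the regressor couples into the objective (the MSE regression term, the weight `lam`, and all array shapes are assumptions for illustration; the exact formulation is given in [4]):

```python
import numpy as np

def pvae_style_loss(x, x_recon, mu, log_var, y_pred, y_true, lam=1.0):
    """Illustrative joint objective: reconstruction + KL + property regression.

    The relative weight `lam` and the MSE form of the regression term
    are assumptions, not the exact loss of the pVAE paper.
    """
    eps = 1e-7
    recon = -np.sum(x * np.log(x_recon + eps) + (1 - x) * np.log(1 - x_recon + eps))
    kl = 0.5 * np.sum(mu**2 + np.exp(log_var) - log_var - 1.0)
    reg = np.mean((y_pred - y_true) ** 2)  # predicted vs. target effective properties
    return recon + kl + lam * reg

# Toy call: perfect reconstruction, posterior == prior, small property error.
x = np.array([0.0, 1.0, 1.0, 0.0])
loss = pvae_style_loss(x, x, np.zeros(3), np.zeros(3),
                       np.array([0.42]), np.array([0.40]))
print(loss > 0)  # True: only the small regression error contributes
```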
Reference:
[4] P.T. Nguyen, Y. Heider, D. Kochmann, and F. Aldakheel, Deep learning-aided inverse design of porous metamaterials, CMAME, (2025)
[5] P.T. Nguyen, B.-e. Ayouch, Y. Heider, and F. Aldakheel, Impact of Dataset Size and Hyperparameters Tuning in a VAE for Structure-Property Mapping in Porous Metamaterials, PAMM, (2025)
This project is licensed under the MIT License. See the LICENSE file for more details.