
DLAG (Deep Learning Airfoil Generator) is a deep-learning-based tool that generates airfoil contours using a variational autoencoder architecture.

Architecture

An autoencoder is a particular neural network architecture that, among the deep generative models able to generate content (for example, GANs), aims to generate samples. It is based on a bottleneck architecture with a compression stage (encoder), in which the basic features of the input are extracted, and a reconstruction stage (decoder), in which the compressed features extracted before are used to reconstruct the input. This compressed information forms what is called the latent space: a set of variables (that may or may not have physical meaning) that gathers the essential components/features of the input needed to (re)generate it.

The downside of such a network is that the latent space is not regularized, which means that a sample taken from it to reconstruct the input may produce nonsensical outputs. This is because the latent space generated by a classic autoencoder is not continuous: it is made of isolated features generated during training, among which the autoencoder cannot interpolate to produce a coherent input for the decoder. Variational autoencoders (VAE) prevent this discontinuity (unregularized latent space) by generating a latent space based on a normal distribution; this forces a sample drawn from the latent space to be structured and coherent with the features extracted during the compression (encoding) stage.
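As a concrete illustration of how a VAE regularizes the latent space, below is a minimal sketch of the sampling (reparameterization) step and the KL-divergence penalty, written with TensorFlow/Keras. Layer sizes, names and values are illustrative assumptions, not the actual DLAG implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 16  # illustrative latent-space size, not the actual DLAG value

class Sampling(layers.Layer):
    """Reparameterization trick: z = mean + std * eps, with eps ~ N(0, I)."""
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

def kl_divergence(z_mean, z_log_var):
    """KL term pushing the latent distribution towards N(0, I); adding it to
    the reconstruction loss is what regularizes the latent space."""
    return -0.5 * tf.reduce_mean(
        tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var),
                      axis=1))

# Toy usage with a batch of 4 "encoded" samples
z_mean = tf.zeros((4, latent_dim))
z_log_var = tf.zeros((4, latent_dim))
z = Sampling()([z_mean, z_log_var])          # latent sample fed to the decoder
loss_kl = kl_divergence(z_mean, z_log_var)   # added to the reconstruction loss
```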

Airfoil treatment

An airfoil is a closed contour that must be treated in a specific manner in order to obtain a smooth contour that satisfies the imposed design requirements. As opposed to the CGAE project, in which the aim was to generate random contours with the aid of a VAE, in this project the treatment of the airfoil contour has been different, since one of the key features of DLAG is that the architecture has to produce contours while also satisfying certain design requirements. The upper and lower contours can be expressed as a function of the camber and thickness contours:
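The relation is shown as an image on the original page; under the usual convention in which the thickness t(x) is distributed symmetrically about the camber line, the decomposition reads (this exact form is an assumption about the convention used):

$$
\begin{aligned}
z_{\text{upper}}(x) &= z_{\text{camber}}(x) + \tfrac{1}{2}\,t(x) \\
z_{\text{lower}}(x) &= z_{\text{camber}}(x) - \tfrac{1}{2}\,t(x)
\end{aligned}
$$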

This way, the design process can be split into the design of the camber contour and the design of the thickness contour. This is the strategy followed in this project in order to simplify the treatment of a closed contour and to avoid discontinuities that might have arisen had the contour not been split into these two contributions.

Inputs & outputs

The input of the NN is composed of two parts:

  1. Black & white image of the contour to be digested by the network, in an unfolded array structure.
  2. One-hot array containing the design parameters to satisfy. The size of this array equals the number of design parameters currently available:
  • Leading edge radius
  • Trailing edge angle (the angle between the upper and lower side at the joint point)
  • Maximum height of the curve, Zmax
  • Chordwise position of the maximum height of the curve, Xzmax
  • Minimum height of the curve, Zmin
  • Chordwise position of the minimum height of the curve, Xzmin
  • Height of the leading edge, Zle
  • Height of the trailing edge, Zte
  • Slope values (dz/dx) at different chordwise positions (x), in an attempt to control the curvature of the curve

The user can activate any of these parameters, either one or several of them, for each training session.

Thus, a sample of the dataset fed to the network is an array made of "N" design parameters (as many as the user decides) and (Nh x Nw) pixel intensity values, corresponding to the size of the picture (Nh being the number of pixels in the height direction and Nw the number of pixels in the width direction). As for the output of the autoencoder, it generates an unfolded array of the generated picture, with the same size as the input (Nh x Nw).
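As a sketch of how such a sample could be assembled (the names, picture size and parameter values below are hypothetical, not the actual DLAG code):

```python
import numpy as np

Nh, Nw = 64, 128          # assumed picture size (height x width), in pixels
n_design = 2              # e.g. Xzmax and Zmax activated by the user

# Black & white picture of the contour, pixel intensities in [0, 1]
image = np.zeros((Nh, Nw), dtype=np.float32)

# Design parameters the generated contour must satisfy (hypothetical values)
design = np.array([0.5, 0.08], dtype=np.float32)   # [Xzmax, Zmax]

# One dataset sample: N design parameters followed by the unfolded picture,
# i.e. an array of length N + Nh * Nw
sample = np.concatenate([design, image.ravel()])
assert sample.shape == (n_design + Nh * Nw,)
```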

As an example, the following pictures show a decomposition of the airfoil 2032c into its camber and thickness contours:

Contour of the airfoil 2032c.

Training

As explained in the previous section, the training process is composed of two sessions:

  1. One training session to train a VAE to recognize and generate camber contours
  2. A second training session to train another VAE to recognize and generate thickness contours

The most representative training parameters to set up the case are the following (a hypothetical configuration is sketched after the list):

  • Design parameters. A list of the available ones is stated in the previous section
  • Latent dimension
  • Hidden layers dimension
  • Learning rate
  • Number of epochs
  • Regularization parameters: L1, L2, dropout
  • Batch size
  • Activation functions (Relu, LeakyRelu, Elu, Swish, Tanh, Sigmoid)
  • Training size (over the whole dataset provided)
  • Possibility to add additional datasets, on top of the main one
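
A configuration gathering these parameters could look like the sketch below; the keys and values are illustrative assumptions, not the actual DLAG interface.

```python
# Illustrative training set-up; keys and values are assumptions, not DLAG's API
training_config = {
    "design_parameters": ["Xzmax"],      # any subset of the list above
    "latent_dim": 16,
    "hidden_layers": [256, 128],
    "learning_rate": 1e-3,
    "epochs": 500,
    "regularization": {"l1": 0.0, "l2": 1e-4, "dropout": 0.1},
    "batch_size": 32,
    "activation": "leakyrelu",           # relu, leakyrelu, elu, swish, tanh, sigmoid
    "training_size": 0.75,               # fraction of the whole dataset
    "extra_datasets": [],                # additional datasets on top of the main one
}
```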

Dataset augmentation

Due to the lack of a large dataset with which to comprehensively train the network, several data augmentation techniques have been implemented with the aid of OpenCV. The operations available are the following:

  • Rotation
  • Filtering
  • Flipping
  • Zoom
  • Image resizing

However, in order to generate realistic new samples, the only operation that has been used is flipping, as sketched below.
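A minimal sketch of such a flip with OpenCV (file names are placeholders, and the flip axis is an assumption, not necessarily the one used by DLAG):

```python
import cv2

# Read a contour picture in black & white and mirror it around the vertical
# axis (flipCode=1); flipCode=0 would mirror around the horizontal axis.
img = cv2.imread("contour.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
flipped = cv2.flip(img, 1)
cv2.imwrite("contour_flipped.png", flipped)
```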

Example over a representative dataset

An original dataset made of 679 samples has been used to train the model. In order to cross-validate while training, the dataset was split into three subsets:

  • Training dataset, used entirely to train the model. Size: 75% of the original dataset.
  • Cross-validation dataset, used to assess the performance (overfitting/underfitting) of the model while training. Size: 12.5% of the original dataset.
  • Test dataset, used to make predictions once the model is trained. Size: 12.5% of the original dataset.
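
A minimal sketch of such a 75/12.5/12.5 split (using scikit-learn, which is an assumption; the actual DLAG splitting code may differ):

```python
import numpy as np
from sklearn.model_selection import train_test_split

samples = np.random.rand(679, 10)   # placeholder array standing in for the dataset

# First split off the 75% training set, then halve the remainder into
# cross-validation and test sets (12.5% each)
train, rest = train_test_split(samples, train_size=0.75, random_state=0)
cval, test = train_test_split(rest, test_size=0.5, random_state=0)
```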

Results

Below are some predicted contours over the set of test samples (contours unseen by the model).

As for the capability of the model to generate new samples (i.e., to sample from the latent space), some generated contours are shown hereafter. Several case studies have been analyzed, each varying in the design parameter selected to train the VAE. In this results summary, two cases have been considered:

  • Case 1, in which the design parameter is the position of the maximum height of the contour, Xzmax
  • Case 2, in which the design parameters are the slope values of the contour at the chordwise positions x=0.2 and x=0.7. The design process consists in finding a contour whose slope (either camber or thickness) at those x-locations is as close as possible to the ones specified (the values are not provided here, as they are a degree of freedom for the user).

Case 1: XZMAX

In this case the requirement to satisfy is that the chordwise position of the maximum height be located at x=0.5.

Case 2: DZDX

In this case the requirement is to satisfy the values imposed on the slope of the airfoil curve at the chordwise position x=0.2.