[CVAE-GAN] An inverse design method for supercritical airfoils
Authors: Jing WANG, Runze LI, Cheng HE, et al. [PDF]
supercritical airfoil: an airfoil shaped for efficient transonic flight, with a flattened upper surface that weakens the shock wave.
Inverse design usually proceeds in two steps (a toy sketch of this loop follows).
First, target pressure distributions are generated to reflect the design goals (traditionally specified by an experienced aerodynamicist).
Second, optimization methods are applied to the parameterized airfoil so that its pressure distribution converges to the target distribution.
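A minimal sketch of that traditional two-step loop, where the parameterization and the "solver" are hypothetical stand-ins (a real implementation would use a CST/FFD parameterization and a CFD solver, neither of which is specified here):

```python
import numpy as np
from scipy.optimize import minimize

X = np.linspace(0.0, 1.0, 101)  # chordwise stations

def evaluate_pressure(params):
    """Stand-in for evaluating the parameterized airfoil with a flow solver.
    A smooth analytic surrogate is used purely so the sketch runs."""
    a, b, c = params
    return a * np.sin(np.pi * X) + b * X * (1 - X) + c

def inverse_design(target_cp, x0=(1.0, 0.0, 0.0)):
    # Step 2: adjust the design variables until the computed pressure
    # distribution matches the target specified in step 1.
    def mismatch(p):
        return np.mean((evaluate_pressure(p) - target_cp) ** 2)
    return minimize(mismatch, x0, method="Nelder-Mead")

target = evaluate_pressure((0.8, 0.3, -0.1))  # the target an aerodynamicist would specify
result = inverse_design(target)
```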
GAN: a generator competes against a discriminator.
CGAN (Conditional Generative Adversarial Network): generation is conditioned on a vector of conditional data.
VAE (Variational AutoEncoder): encodes the input data into a probabilistic latent space.
CVAE (Conditional Variational AutoEncoder): a VAE whose encoder and decoder are additionally conditioned; the sketch after this list shows how the condition enters the networks.
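A minimal sketch of how conditioning typically enters these models, via concatenation of the condition vector with the usual network input (the condition contents here are illustrative, not the paper's exact condition set):

```python
import torch

# The condition vector c (e.g., a lift coefficient and flow conditions;
# illustrative choices only) is appended to whatever the network normally sees.
z = torch.randn(8, 16)          # latent/noise batch
c = torch.randn(8, 3)           # condition batch
gen_input = torch.cat([z, c], dim=-1)    # CGAN/CVAE generator input

x = torch.randn(8, 200)         # data sample batch
disc_input = torch.cat([x, c], dim=-1)   # conditional discriminator input
```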
Each airfoil is represented as a sequence of (x, y) coordinate pairs starting from the trailing edge.
Since the x coordinates are shared by all airfoils, only the y coordinates are involved in training,
which reduces the dimensionality of the network's input and output and accelerates the training procedure.
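A small sketch of this dimensionality reduction, assuming every airfoil in the dataset shares one fixed set of x stations (array shapes and names are illustrative):

```python
import numpy as np

# coords: (n_airfoils, n_points, 2) array of (x, y) pairs,
# ordered from the trailing edge as described above.
coords = np.random.rand(500, 200, 2)  # dummy data for the sketch

x_shared = coords[0, :, 0]   # x stations are identical across samples,
y_train = coords[:, :, 1]    # so only y enters the network: shape (500, 200)
# Half the raw dimensionality is dropped, which speeds up training;
# x_shared is kept once to reconstruct full (x, y) shapes afterwards.
```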
Wall Mach number ($Ma_w$) distribution
This paper uses only airfoils with a single shock wave, since the weak-shock pressure distribution was evaluated as the best.
Feature | Symbol | Meaning |
---|---|---|
Suction peak | $F_{sp}$ | The $x$ and $Ma_w$ values at the point with the highest wall Mach number. |
Start of the shock wave | $F_{sw0}$ | The $x$ and $Ma_w$ values at the start of the shock wave. |
End of the shock wave | $F_{sw1}$ | The $x$ and $Ma_w$ values at the end of the shock wave. |
Aft loading | $F_{al}$ | The $x$ and $Ma_w$ values at the point with the maximum difference in wall Mach number between the upper and lower surfaces near the trailing edge. |
Maximum wall Mach number of the lower surface | $F_{lm}$ | The $x$ and $Ma_w$ values at the point with the maximum wall Mach number on the lower surface. |
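A hedged sketch of extracting some of these features from a discretized wall Mach number curve; the gradient-threshold shock detection is an assumed heuristic for illustration, not the paper's exact criterion:

```python
import numpy as np

def suction_peak(x_u, ma_u):
    """F_sp: x and Ma_w at the highest wall Mach number (upper surface)."""
    i = np.argmax(ma_u)
    return x_u[i], ma_u[i]

def lower_surface_max(x_l, ma_l):
    """F_lm: x and Ma_w at the maximum wall Mach number on the lower surface."""
    i = np.argmax(ma_l)
    return x_l[i], ma_l[i]

def shock_span(x_u, ma_u, drop_threshold=-2.0):
    """F_sw0 / F_sw1: bracket the single shock as the region where Ma_w
    falls steeply (threshold on dMa/dx is an assumption, not the paper's)."""
    grad = np.gradient(ma_u, x_u)
    steep = np.where(grad < drop_threshold)[0]
    if steep.size == 0:
        return None  # no steep drop found under this heuristic
    i0, i1 = steep[0], steep[-1]
    return (x_u[i0], ma_u[i0]), (x_u[i1], ma_u[i1])
```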
The paper compares the CVAE and CVAE-GAN models.
WGAN is used instead of a traditional GAN, which improves the stability of learning.
The VAE decoder and the GAN generator are collapsed into one by sharing their parameters and training them jointly.
Encoder network $E$: maps the data sample $x$ to a latent representation $z$ through a learned distribution $P(z \mid x, c)$, where $c$ is the given condition that the data satisfy.
Generative network $G$: generates $x'$ under the given latent vector $z$ and condition $c$ by sampling from a learned distribution $P(x \mid z, c)$ (a PyTorch sketch of the full model follows the loss definitions below).
Loss (training stage): $L_{\text{generator}} = \operatorname{MSE}(x', x) + \operatorname{KL}\big(q(z \mid x, c) \,\|\, p(z \mid x)\big) + D(G(z))$
Discriminative network $D$: this network is the same as the critic in WGAN.
Loss (final loss): $L_{\text{discriminator}} = D(x) - D(G(z))$
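A minimal PyTorch sketch of the three networks and one joint training step, assuming simple MLPs, concatenation-based conditioning, and standard reparameterization; layer sizes and optimizer details are illustrative, not the paper's. The two loss terms follow the formulas above, keeping their sign convention as written:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """E: maps (x, c) to the parameters of q(z|x,c)."""
    def __init__(self, x_dim=200, c_dim=4, z_dim=16):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim + c_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)

    def forward(self, x, c):
        h = self.body(torch.cat([x, c], dim=-1))
        return self.mu(h), self.logvar(h)

class Generator(nn.Module):
    """G: the VAE decoder and GAN generator sharing one set of weights."""
    def __init__(self, z_dim=16, c_dim=4, x_dim=200):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + c_dim, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim))

    def forward(self, z, c):
        return self.net(torch.cat([z, c], dim=-1))

class Critic(nn.Module):
    """D: WGAN critic, outputs an unbounded realness score."""
    def __init__(self, x_dim=200):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, x):
        return self.net(x)

def training_step(E, G, D, x, c, opt_g, opt_d, clip=0.01):
    # Encoder/generator update: MSE(x', x) + KL(q(z|x,c) || p(z|x)) + D(G(z)),
    # with the adversarial sign convention taken from the notes above.
    mu, logvar = E(x, c)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
    x_rec = G(z, c)
    recon = F.mse_loss(x_rec, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss_g = recon + kl + D(x_rec).mean()
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

    # Critic update: L_discriminator = D(x) - D(G(z)).
    with torch.no_grad():
        x_fake = G(z, c)
    loss_d = (D(x) - D(x_fake)).mean()
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    for p in D.parameters():          # weight clipping enforces the Lipschitz
        p.data.clamp_(-clip, clip)    # constraint, as in the original WGAN
    return loss_g.item(), loss_d.item()
```

Here `opt_g` is assumed to optimize the parameters of both $E$ and $G$ jointly, reflecting the shared decoder/generator described above.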