Black-Box Optimization of the Airfoil Geometry with Fluid Flow Simulation¶
Airfoils of various types are used in aircraft, transportation equipment, and many energy devices, and their cross-sectional shape (the airfoil profile) has a significant impact on the performance and efficiency of these industrial products.
Generally, an airfoil can be optimized by various trial-and-error methods to maximize lift (or downforce) and minimize drag. However, evaluating an airfoil profile (computing or measuring the lift and drag forces) requires experiments or fluid simulations, and the cost per evaluation (in time and money) is relatively high. It is therefore impractical to optimize an airfoil profile over multiple parameters by exhaustive search.
In this tutorial, black-box optimization based on a factorization machine and simulated annealing is used to optimize an airfoil profile with a limited number of evaluations. In the optimization process, the ratio of the lift $F_L$ to the drag $F_D$ acting on an airfoil placed in a flow field (the lift-drag ratio) is maximized. The lift-drag ratio $r_{LD}$ is defined as:
$$ r_{LD} = F_L/F_D $$
There are several ways to construct an airfoil shape. Here, the airfoil is constructed by applying a mapping called the Joukowski transformation to a circle in the complex plane. The negative of the lift-drag ratio ($-r_{LD}$) is used as the objective function, and a fluid flow simulation is used to obtain (evaluate) its value.
In addition, the optimization is performed over non-binary decision variables: the inputs are integers, which are used to express discrete real numbers. To describe the objective function and the acquisition function as QUBO formulas, these non-binary variables must be converted into binary decision variables. For this purpose, this example program uses one-hot encoding and introduces the necessary encoder/decoder implementation.
For basic knowledge of black-box optimization and FMQA, see "Black-Box Optimization with Quantum Annealing/Ising Machines". For another practical application of black-box optimization, see "Black-Box Optimization of Operating Condition in a Chemical Reactor".
This notebook consists of the following sections.
- 1. Airfoil definition and fluid flow simulation
- 2. FMQA program implementation (non-binary decision variables)
- 3. Search for the optimal airfoil profile
$^*$ The execution time is limited to about 20 minutes in this online demo and tutorial environment. If necessary, please run the program with a smaller number of optimization cycles $N$ (see below) or copy this example program to your own environment. In that case, also download the following libraries related to airfoil generation and fluid flow simulation, save them in the directory structure below, and run the example code.
├ fmqa_3_aerofoil.ipynb (this sample program)
└ utils/
  ├ __init__.py (blank)
  ├ joukowski.py
  └ lbm.py
1. Airfoil definition and fluid flow simulation¶
The following airfoil shape generator and fluid analysis classes are provided in this example program.
- Airfoil generator class: joukowski.WingGenerator
- Fluid flow simulator class: lbm.Solver
In this section, we will use the prepared classes to simulate fluid flow around an airfoil and explain how to obtain the lift-drag ratio $r_{LD}$, which is directly related to the objective function in this black-box optimization example.
1.1. Airfoil generator class¶
This example program uses the Joukowski transform as the method for constructing the airfoil. The Joukowski transform is
$$ z(x,y) = \zeta(\xi, \eta) + \frac{c^2}{\zeta(\xi, \eta)} $$
and represents a map $\zeta \rightarrow z$ between two complex planes. The airfoil profile in this example program is constructed by applying the Joukowski transform to a circle in the $\zeta$ plane whose center is $(\xi_0, \eta_0)$ and which passes through the point $(c, 0)$; the resulting shape is then rotated by $\alpha$ in the $z$ plane. The parameters $\xi_0$, $\eta_0$, and $\alpha$ correspond to the wing thickness, warping (camber), and angle of attack, respectively.
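To make the mapping concrete, the following minimal sketch applies the transform to a circle with NumPy, independently of the provided joukowski module. The parameter values are illustrative only and are not those used by joukowski.WingGenerator.
import numpy as np

# Illustrative (hypothetical) parameters; the actual candidate values are defined in utils/joukowski.py
c = 1.0                    # the circle passes through the point (c, 0)
xi0, eta0 = -0.1, 0.1      # circle-center offsets (controlling thickness and camber)
center = xi0 + 1j * eta0
radius = abs(c - center)   # radius chosen so that the circle passes through (c, 0)

theta = np.linspace(0.0, 2.0 * np.pi, 200)
zeta = center + radius * np.exp(1j * theta)  # circle in the zeta plane
z = zeta + c**2 / zeta                       # Joukowski transform: airfoil-like contour in the z plane
print(z[:3])                                 # a few points on the resulting contour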
Let us generate an airfoil using joukowski.WingGenerator, the airfoil generator class. The mapping parameters $(\xi_0, \eta_0, \alpha)$ are real numbers, but for the sake of discrete optimization, we will choose each parameter from a finite number of candidates. Such an operation is called integer indexing.
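As a minimal sketch of integer indexing (the actual tables are provided by wing_generator.show_index_table(); the candidate list below is hypothetical), a continuous parameter is discretized into a finite list of candidates, and a value is selected by its integer index:
import numpy as np

# Hypothetical candidate table: 40 bins for the angle of attack, 0 to 39 deg in 1-deg steps
alpha_candidates = np.linspace(0.0, 39.0, 40)
idx = 20
print(f"alpha[{idx}] = {alpha_candidates[idx]:.2f} deg")  # -> alpha[20] = 20.00 deg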
from utils import joukowski
# Instantiate the airfoil generator class, joukowski.WingGenerator
wing_generator = joukowski.WingGenerator()
# Display integer index table of airfoil generator parameters
wing_generator.show_index_table()
With wing_generator.show_index_table(), you can check the correspondence between the integer indices used for airfoil generation and the parameter values $(\xi_0, \eta_0, \alpha)$. For example, $\xi_0=5.50$, $\eta_0=3.75$, $\alpha=20.00$ deg can be specified as $\xi_0[10]=5.50$, $\eta_0[15]=3.75$, $\alpha[20]=20.00$ deg using integer indices. With these parameters, the airfoil profile is generated and displayed as follows.
Let us construct an airfoil using different integer index values and see the mapping parameters and the airfoil profile.
# Perform airfoil generation (mapping transformation). xi0, eta0, and alpha are specified by integer indices. See the output of wing_generator.show_index_table() for the index table.
wing = wing_generator.generate_wing(10, 15, 20)
# Show the constructed airfoil profile
wing.draw()
# Show the mapping parameters for transformation
wing_generator.show_parameters("My wing")
1.2. Fluid flow simulator class¶
In this tutorial, we will use a fluid flow simulator to evaluate airfoil profiles. Various fluid simulation methods have been developed; here, we use a two-dimensional fluid flow simulation based on the lattice Boltzmann method because of its moderate computational cost and the ease of setting boundary conditions. A lattice Boltzmann solver is available from the fluid flow simulator class lbm.Solver.
As an example, the flow around the airfoil constructed above is simulated as follows. Note that under the given conditions, one simulation will take about 20 seconds of computation time. After the simulation, the flow velocity vector (black arrow) and the fluid force acting on the airfoil (combined lift and drag force; red arrow) are shown in the visualization. The color indicates the vorticity.
This time, we aim to maximize the ratio of the lift force $F_L$ to the drag force $F_D$ (the lift-drag ratio) $r_{LD} = F_L/F_D$, so we take the negative of the lift-drag ratio, $-r_{LD}$, as the objective function and perform the optimization to minimize it. The evaluated value of $-r_{LD}$ is printed at the end of the simulation below.
from utils import lbm
# Simulation grid size (nx: horizontal, ny: vertical)
nx, ny = 400, 160
# Instantiate the lbm.Solver class.
solver = lbm.Solver(nx, ny)
# Add the airfoil, wing, constructed above as a boundary condition (profile model)
solver.add_model(wing)
# Run 3000-time-step simulation
solver.integrate(steps=3000)
# Calculate the negative value of the lift-drag ratio, the objective function, from the simulation results (for this FMQA, minimize this value)
cost = -solver.fy_average / solver.fx_average
# Print the evaluated value of the objective function
print(f"{cost=:.2f}")
2. FMQA program implementation (non-binary decision variables)¶
In this section, we will implement the black-box optimization (FMQA). Much of the implementation of the FMQA part can be found in "Black-Box Optimization with Quantum Annealing/Ising Machines", so please refer to that document for details.
Please note that the main difference is that the present black-box optimization considers integer vectors as input values to the objective function instead of binary variables. For the handling of integer input values in FMQA, see "One-hot encoder/decoder".
2.1. Random seed initialization¶
We define a function seed_everything() to initialize the random seed values, ensuring that the machine learning results do not change from run to run.
import os
import torch
import numpy as np
def seed_everything(seed=0):
    os.environ["PYTHONHASHSEED"] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.backends.cudnn.deterministic = True
2.2. Configuration of Amplify client¶
Here, we create an Amplify client and set the necessary parameters. In the following, we set the timeout for a single search by the Ising machine to 1 second.
from amplify.client import FixstarsClient
client = FixstarsClient()
client.parameters.timeout = 1000 # Timeout 1s
# client.token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" # If you use Amplify in a local environment or Google Colaboratory, enter your Amplify API token.
2.3. Implementing FM with PyTorch¶
This example code implements the FM with PyTorch. In the TorchFM class, we define the acquisition function $g(x)$ as a machine learning model. Each term in $g(x)$ directly corresponds to out_lin, out_1, out_2, and out_inter in the TorchFM class, as in the following equation.
$$ \begin{aligned} g(\boldsymbol{x} | \boldsymbol{w}, \boldsymbol{v}) &= \underset{\color{red}{\mathtt{out\_lin}}}{\underline{ w_0 + \sum_{i=1}^n w_i x_i} } + \underset{\color{red}{\mathtt{out\_inter}}}{\underline{\frac{1}{2} \left[\underset{\color{red}{\mathtt{out\_1}}}{\underline{ \sum_{f=1}^k\left(\sum_{i=1}^n v_{i f} x_i\right)^2 }} - \underset{\color{red}{\mathtt{out\_2}}}{\underline{ \sum_{f=1}^k\sum_{i=1}^n v_{i f}^2 x_i^2 }} \right] }} \end{aligned} $$
import torch.nn as nn
class TorchFM(nn.Module):
    def __init__(self, d: int, k: int):
        super().__init__()
        self.V = nn.Parameter(torch.randn(d, k), requires_grad=True)
        self.lin = nn.Linear(d, 1)  # fully connected layer for the first and second terms on the right-hand side

    def forward(self, x):
        out_1 = torch.matmul(x, self.V).pow(2).sum(1, keepdim=True)
        out_2 = torch.matmul(x.pow(2), self.V.pow(2)).sum(1, keepdim=True)
        out_inter = 0.5 * (out_1 - out_2)
        out_lin = self.lin(x)
        out = out_inter + out_lin
        return out
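As a quick sanity check (not part of the FMQA flow), we can instantiate TorchFM with illustrative sizes and run a forward pass on a random binary batch to confirm the output shape:
d, k = 100, 20                                 # illustrative sizes
fm = TorchFM(d, k)
x_batch = torch.randint(0, 2, (8, d)).float()  # batch of 8 random binary vectors
print(fm(x_batch).shape)                       # -> torch.Size([8, 1])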
Next, a function train() is defined to train the FM based on the training data sets. As in general machine learning methods, this function divides the data sets into training data and validation data, optimizes the FM parameters using the training data, and validates the model during training using the validation data. The train() function returns the model with the highest prediction accuracy on the validation data.
from torch.utils.data import TensorDataset, DataLoader
from sklearn.model_selection import train_test_split
import copy
def train(
    X,
    y,
    model_class=None,
    model_params=None,
    batch_size=1024,
    epochs=3000,
    criterion=None,
    optimizer_class=None,
    opt_params=None,
    lr_sche_class=None,
    lr_sche_params=None,
):
    X_tensor, y_tensor = (
        torch.from_numpy(X).float(),
        torch.from_numpy(y).float(),
    )
    indices = np.array(range(X.shape[0]))
    indices_train, indices_valid = train_test_split(
        indices, test_size=0.2, random_state=42
    )
    train_set = TensorDataset(X_tensor[indices_train], y_tensor[indices_train])
    valid_set = TensorDataset(X_tensor[indices_valid], y_tensor[indices_valid])
    loaders = {
        "train": DataLoader(train_set, batch_size=batch_size, shuffle=True),
        "valid": DataLoader(valid_set, batch_size=batch_size, shuffle=False),
    }
    model = model_class(**model_params)
    best_model_wts = copy.deepcopy(model.state_dict())
    optimizer = optimizer_class(model.parameters(), **opt_params)
    if lr_sche_class is not None:
        scheduler = lr_sche_class(optimizer, **lr_sche_params)
    best_score = 1e18
    for epoch in range(epochs):
        losses = {"train": 0.0, "valid": 0.0}
        for phase in ["train", "valid"]:
            if phase == "train":
                model.train()
            else:
                model.eval()
            for batch_x, batch_y in loaders[phase]:
                optimizer.zero_grad()
                out = model(batch_x).T[0]
                loss = criterion(out, batch_y)
                losses[phase] += loss.item() * batch_x.size(0)
                with torch.set_grad_enabled(phase == "train"):
                    if phase == "train":
                        loss.backward()
                        optimizer.step()
            losses[phase] /= len(loaders[phase].dataset)
        with torch.no_grad():
            model.eval()
            if best_score > losses["valid"]:
                best_model_wts = copy.deepcopy(model.state_dict())
                best_score = losses["valid"]
        if lr_sche_class is not None:
            scheduler.step()
    with torch.no_grad():
        model.load_state_dict(best_model_wts)
        model.eval()
    return model
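For reference, here is a minimal usage sketch of train() with synthetic random data (the real call, with the FMQA training data and tuned hyperparameters, appears inside the FMQA_NB class in the next section; the data and settings here are illustrative only):
d, k = 100, 20
X_demo = np.random.randint(0, 2, (32, d)).astype(float)  # synthetic binary inputs
y_demo = np.random.rand(32)                              # synthetic objective values
fm_demo = train(
    X_demo,
    y_demo,
    model_class=TorchFM,
    model_params={"d": d, "k": k},
    batch_size=8,
    epochs=10,  # kept tiny for a quick check
    criterion=nn.MSELoss(),
    optimizer_class=torch.optim.AdamW,
    opt_params={"lr": 0.1},
)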
2.4. Construction of initial training data¶
The gen_training_data_flow_onehot function evaluates the objective function $f(\boldsymbol{x})$ at input values $\boldsymbol{x}$ to produce $N_0$ input-output pairs (the initial training data). The input values x can be determined in a variety of ways, such as by using random numbers or values suitable for machine learning based on prior knowledge. You can also build up the training data from the results of previous experiments or simulations.
To handle non-binary integer input values, we pass an input_generator function, which generates a random input directly as a one-hot-encoded binary vector. The function used here, gen_random_input, is defined in the one-hot encoder/decoder class EncDecOneHot, which will be introduced later.
def gen_training_data_flow_onehot(D: int, N0: int, true_func, input_generator):
    X = np.zeros((N0, D))
    y = np.zeros(N0)
    print("")
    print("###################################################")
    print(" Making initial training data")
    print("###################################################")
    for i in range(N0):
        print(f"Generating training data set #{i}.")
        # Randomly generate an input vector consisting of 0s and 1s
        x = input_generator()
        # If the same input vector already exists, regenerate the input values
        is_identical = True
        while is_identical:
            is_identical = False
            for j in range(i + 1):
                if np.all(x == X[j, :]):
                    x = input_generator()
                    is_identical = True
                    break
        # Evaluate the input value x with the objective function f(x) and copy the input-output pair (x, f(x)) to the training data
        X[i, :] = x
        y[i] = true_func(x)
        print(f"------------------------")
    return X, y
2.5. Execution class for FMQA cycle¶
FMQA_NB.cycle() executes the FMQA cycle $N−N_0$ times using the pre-prepared initial training data. FMQA_NB.step(), which executes a single FMQA cycle, is called $N−N_0$ times by FMQA_NB.cycle().
For FMQA with non-binary variables, each variable must be encoded into binary variables. The FMQA_NB class assumes one-hot encoding. The encoder and decoder for the non-binary variables are specified at instantiation of this class as encoder_decoder.encoder and encoder_decoder.decoder, respectively. Also, nbins_array is a list of the number of bins used to encode each non-binary variable.
from amplify import Solver, BinarySymbolGenerator, sum_poly
from amplify.constraint import one_hot
import matplotlib.pyplot as plt
import sys
# FMQA for non-binary inputs
class FMQA_NB:
    def __init__(self, D, N, N0, k, true_func, solver, encoder_decoder) -> None:
        assert N0 < N
        self.D = D
        self.N = N
        self.N0 = N0
        self.k = k
        self.true_func = true_func
        self.solver = solver
        self.nbins_array = encoder_decoder.nbins_array
        self.encoder = encoder_decoder.encoder
        self.decoder = encoder_decoder.decoder
        self.y = None

    # A member function that performs the FMQA cycle (N - N0) times, adding new training data at each iteration
    def cycle(self, X, y, log=False) -> np.ndarray:
        # Weight for the one-hot constraints
        constraint_weight = max([1, int(np.abs(y).max() + 0.5)])
        print("")
        print("###################################################")
        print(f" Starting FMQA cycles... {constraint_weight=}")
        print("###################################################")
        pred_x = X[0]
        pred_y = 1e18
        for i in range(self.N - self.N0):
            print(f"FMQA Cycle #{i} ")
            try:
                x_hat = self.step(X, y, constraint_weight)
            except RuntimeError:
                sys.exit(f"Unknown error, i = {i}")
            # If an input value identical to the found x_hat already exists in the current training data set, a neighboring value is used as the new x_hat.
            is_identical = True
            while is_identical:
                is_identical = False
                for j in range(i + self.N0):
                    if np.all(x_hat == X[j, :]):
                        # Decode the binary vector to an integer vector and copy it to inputs
                        inputs = self.decoder(x_hat)
                        # Get the surrounding values (inputs is an integer vector)
                        inputs += np.random.randint(-1, 2, len(inputs))
                        for i_inp in range(len(inputs)):
                            if inputs[i_inp] < 0:
                                inputs[i_inp] = 0
                            elif inputs[i_inp] > self.nbins_array[i_inp] - 1:
                                inputs[i_inp] = self.nbins_array[i_inp] - 1
                        # Encode the integer vector back to a binary vector and copy it to x_hat
                        x_hat = self.encoder(inputs)
                        if log:
                            print(f"{i=}, Identical x is found, {x_hat=}")
                        is_identical = True
                        break
            # Evaluate the objective function f() with x_hat
            y_hat = self.true_func(x_hat)
            # Add the input-output pair [x_hat, y_hat] to the training data set
            X = np.vstack((X, x_hat))
            y = np.append(y, y_hat)
            # Copy the input-output pair to [pred_x, pred_y] when the evaluated value of the objective function updates the minimum value
            if pred_y > y_hat:
                pred_y = y_hat
                pred_x = x_hat
                print(f"variable updated, {pred_y=}")
            else:
                print("")
            print(f"------------------------")
        self.y = y
        return pred_x
    # Member function to perform one FMQA cycle
    def step(self, X, y, constraint_weight) -> np.ndarray:
        # Train the FM
        model = train(
            X,
            y,
            model_class=TorchFM,
            model_params={"d": self.D, "k": self.k},
            batch_size=8,
            epochs=2000,
            criterion=nn.MSELoss(),
            optimizer_class=torch.optim.AdamW,
            opt_params={"lr": 0.5},
            # Set a scheduler to reduce the learning rate with the number of epochs
            lr_sche_class=torch.optim.lr_scheduler.StepLR,
            lr_sche_params={"step_size": 50, "gamma": 0.9},
        )
        # Extract the FM parameters from the trained FM model
        v, w, w0 = list(model.parameters())
        v = v.detach().numpy()
        w = w.detach().numpy()[0]
        w0 = w0.detach().numpy()[0]
        # Solve the QUBO problem using a quantum annealing or Ising machine
        gen = BinarySymbolGenerator()  # Declare a variable generator, BinarySymbolGenerator
        q = gen.array(self.D)  # Generate an array of D binary decision variables
        cost = self.__FM_as_QUBO(q, w0, w, v)  # Define the FM as a QUBO equation from the FM parameters
        # Construct the one-hot constraints for the non-binary variables
        constraints = 0
        ista = 0
        iend = 0
        for i in range(len(self.nbins_array)):
            iend += self.nbins_array[i]
            constraints += one_hot(q[ista:iend])
            ista = iend
        # Add up the objective function and the constraints, and pass the model to the Fixstars Amplify solver
        model = cost + constraint_weight * constraints
        result = self.solver.solve(model)
        if len(result.solutions) == 0:
            raise RuntimeError("No solution was found.")
        values = result.solutions[0].values
        q_values = q.decode(values)
        return q_values
    # A function that defines the FM as a QUBO equation from the FM parameters. As with the previously defined TorchFM class, the formula is written following the form of the acquisition function g(x).
    def __FM_as_QUBO(self, x, w0, w, v):
        lin = w0 + (x.T @ w)
        D = w.shape[0]
        out_1 = sum_poly(self.k, lambda i: sum_poly(D, lambda j: x[j] * v[j, i]) ** 2)
        # Note that x[j] = x[j]^2 holds in the following equation since x[j] is a binary variable.
        out_2 = sum_poly(
            self.k, lambda i: sum_poly(D, lambda j: x[j] * v[j, i] * v[j, i])
        )
        return lin + (out_1 - out_2) / 2

    """The sum_poly used in __FM_as_QUBO above is inefficient in terms of computation speed and
    memory. In the case of FM, where the interaction terms of the decision variables are generally
    nonzero, the following implementation using BinaryMatrix is more efficient. Here, the quadratic
    terms in BinaryMatrix correspond to the off-diagonal terms represented by the upper triangular
    matrix, so the ×(1/2) factor for the quadratic terms in the FM formula is unnecessary. Also,
    although x is taken as an argument just to match the function signature with __FM_as_QUBO above
    (implementation using sum_poly), it is not needed in this implementation using BinaryMatrix.

    def __FM_as_QUBO(self, x, w0, w, v):
        out_1_matrix = v @ v.T
        out_2_matrix = np.diag((v * v).sum(axis=1))
        matrix = BinaryMatrix(out_1_matrix - out_2_matrix + np.diag(w))
        # Do not forget to put the constant term w0 in the second argument of BinaryQuadraticModel.
        model = BinaryQuadraticModel(matrix, w0)
        return model
    """
    # A function to plot the history of the objective function evaluations performed during the initial training data construction (blue) and during the FMQA cycles (red).
    def plot_history(self):
        assert self.y is not None
        fig = plt.figure(figsize=(6, 4))
        plt.plot(
            [i for i in range(self.N0)],
            self.y[: self.N0],
            marker="o",
            linestyle="-",
            color="b",
        )  # Objective function values evaluated during the initial training data generation (random sampling)
        plt.plot(
            [i for i in range(self.N0, self.N)],
            self.y[self.N0 :],
            marker="o",
            linestyle="-",
            color="r",
        )  # Objective function values evaluated during the FMQA cycles
        plt.xlabel("i-th evaluation of f(x)", fontsize=18)
        plt.ylabel("f(x)", fontsize=18)
        plt.tick_params(labelsize=18)
        return fig
2.6. One-hot encoder/decoder¶
When using the generate_wing method of the airfoil generator class, integer indices (e.g. generate_wing(10, 15, 20)) are required as input. On the other hand, the input to a QUBO must consist of binary variables, so an encoding from integer indices to binary variables is required. In this tutorial, we use one-hot encoding. For example, in a one-hot encoding with a 4-bit representation of an integer that can take values from 1 to 4:
- 1 is encoded to [1, 0, 0, 0]
- 2 is encoded to [0, 1, 0, 0]
- 3 is encoded to [0, 0, 1, 0]
- 4 is encoded to [0, 0, 0, 1]
The following EncDecOneHot class defines functions for converting an integer index to binary variables and vice versa (encoder(), decoder()), as well as a function gen_random_input() for generating a random input vector directly as a one-hot binary vector.
class EncDecOneHot:
    def __init__(self, D: int, nbins_array):
        self.D = D
        self.nbins_array = nbins_array

    def gen_random_input(self):
        ista = 0
        iend = 0
        x = np.zeros(self.D, int)
        for i in range(len(self.nbins_array)):
            iend += self.nbins_array[i]
            idx = np.random.randint(ista, iend, 1)[0]
            x[idx] = 1
            ista = iend
        return x

    def encoder(self, inputs):
        if len(inputs) != len(self.nbins_array):
            raise RuntimeError("inputs should be the same length as nbins_array!")
        x = np.zeros(self.D, int)
        i_offset = 0
        for i in range(len(inputs)):
            x[inputs[i] + i_offset] = 1
            i_offset += self.nbins_array[i]
        return x

    def decoder(self, x):
        bit_locs = np.where(x == 1)[0]
        if len(bit_locs) != len(self.nbins_array):
            raise RuntimeError("bit_locs should be the same length as nbins_array!")
        i_offset = 0
        for i in range(1, len(bit_locs)):
            i_offset += self.nbins_array[i - 1]
            bit_locs[i] -= i_offset
        return bit_locs
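A quick round-trip check of the encoder/decoder can be done as follows (the bin counts here are illustrative; the actual values are taken from the airfoil generator in Section 3):
nbins_demo = np.array([20, 40, 40])           # hypothetical bins for (xi0, eta0, alpha)
enc_dec_demo = EncDecOneHot(int(nbins_demo.sum()), nbins_demo)

x_bin = enc_dec_demo.encoder(np.array([10, 15, 20]))  # integer indices -> one-hot binary vector
print(x_bin.sum())                                    # -> 3 (one bit set per variable)
print(enc_dec_demo.decoder(x_bin))                    # -> [10 15 20]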
3. Search for the optimal airfoil profile¶
3.1. FMQA execution example¶
Now, using the fluid flow simulator introduced in Section 1 inside the objective function, we perform the optimization to minimize the negative of the lift-drag ratio with the FMQA implemented in Section 2.
In the following, the number of times $N$ the objective function can be evaluated is set to 40, of which $N_0=20$ evaluations are used to generate the initial training data. Thus, in the example below, $N-N_0=20$ FMQA cycles (machine learning, search for the optimal solution with annealing, and evaluation of the objective function) are performed. Note that with this setup, it takes approximately 20 minutes to complete all FMQA cycles.
An example output is shown in "3.3. Example output from this FMQA sample program".
# Initialize random seed value
seed_everything()
# Instantiate airfoil generator class, joukowski.WingGenerator
wing_generator = joukowski.WingGenerator()
# Display integer index table of airfoil generator parameters
wing_generator.show_index_table()
# One-hot encoding bin size for each variable
nbins_array = np.array(
[wing_generator.nbins_xi0, wing_generator.nbins_eta0, wing_generator.nbins_alpha]
)
# Overall size of binary variables (= size of binary decision variables handled by QUBO)
D = wing_generator.nbins_xi0 + wing_generator.nbins_eta0 + wing_generator.nbins_alpha
# Instantiate one-hot encoder-decoder class (EncDecOneHot)
enc_dec_one_hot = EncDecOneHot(D, nbins_array)
# Counter for the number of objective function evaluations
n_eval = 0
# Objective function (construct airfoil → run simulation → compute objective function value)
def obj_func(x):
    global n_eval
    # Decode the binary variables to integers (mapping parameter indices)
    i_xi0, i_eta0, i_alpha = enc_dec_one_hot.decoder(x)
    # Perform airfoil generation (mapping transformation). xi0, eta0, alpha are integer indices.
    # See the output of wing_generator.show_index_table() for the index table.
    wing = wing_generator.generate_wing(i_xi0, i_eta0, i_alpha)
    # Show the mapping parameters for the transformation
    wing_generator.show_parameters("My wing")
    # Simulation grid size (nx: horizontal, ny: vertical)
    nx, ny = 400, 160
    # Instantiate the lbm.Solver class
    solver = lbm.Solver(nx, ny)
    # Add the airfoil, wing, constructed above as a boundary condition (airfoil model)
    solver.add_model(wing)
    # Run a 5000-time-step simulation
    solver.integrate(steps=5000)
    # Negative value of the lift-drag ratio
    cost = -solver.fy_average / solver.fx_average
    # In case the fluid simulation diverges
    if np.isnan(cost):
        cost = 0
    print(f"#{n_eval} evaluation finished. cost:{cost:.2f}")
    n_eval += 1
    return cost
N = 40 # Number of times the objective function can be evaluated
N0 = 20 # Number of samples of initial training data
k = 20 # Dimension of vector in FM (hyperparameters)
# client: Amplify client created earlier
solver = Solver(client)
# Generate initial training data
X, y = gen_training_data_flow_onehot(D, N0, obj_func, enc_dec_one_hot.gen_random_input)
# Instantiate FMQA
fmqa_solver = FMQA_NB(D, N, N0, k, obj_func, solver, enc_dec_one_hot)
# Perform FMQA cycle
pred_x = fmqa_solver.cycle(X, y)
# Output optimization results
print("###################################################")
print(" Optimal input values and corresponding output")
print("###################################################")
print("pred x:", pred_x)
print("pred value:", obj_func(pred_x))
3.2. Transition of objective function values during the optimization process¶
The following line plots the evolution of the objective function values during the optimization process. The first $N_0$ objective function values (blue line) were obtained from randomly generated input values during the initial training data generation. The following red line shows the objective function values during the $N−N_0$ FMQA optimization cycles. Together, the blue and red lines show how the minimum objective function value is successively updated by the FMQA optimization cycles.
Here, even during the FMQA cycles, input values $\hat{x}$ that produce relatively large objective function values can sometimes be obtained. There are several reasons for this, but the main ones are:
- if the current training data contains the same input value as the newly searched input value, a neighboring input value $\hat{x}$ is randomly chosen and the objective function is evaluated there (see FMQA_NB.cycle),
- the accuracy of the acquisition function $g(\boldsymbol{x})$ at a given FMQA cycle may not be high enough.
In any case, we were able to efficiently perform airfoil optimization by black-box optimization!
fig = fmqa_solver.plot_history()
3.3. Example output from this FMQA sample program¶
In general, due to the heuristic nature of the algorithms used in FixstarsClient, the solutions obtained are not completely reproducible, but the following is a typical standard output and image output obtained when running this example code. The values obtained may vary.
- Without changing the conditions in "3.1. FMQA execution example", the following standard output is produced sequentially as the FMQA cycles progress.
- First, the airfoil profiles constructed from the random input values generated during the initial training data construction, together with their fluid flow simulation results, are output sequentially ($N_0$ times).
[Index table]
           0     1     2     3     4     5     6     7     8     9     10    11
xi0[i]     1.0   1.45  1.9   2.35  2.8   3.25  3.7   4.15  4.6   5.05  5.5   5.95
eta0[i]    0.0   0.25  0.5   0.75  1.0   1.25  1.5   1.75  2.0   2.25  2.5   2.75
alpha[i]   0.0   1.0   2.0   3.0   4.0   5.0   6.0   7.0   8.0   9.0   10.0  11.0

           12    13    14    15    16    17    18    19    20    21    22
xi0[i]     6.4   6.85  7.3   7.75  8.2   8.65  9.1   9.55  -     -     -
eta0[i]    3.0   3.25  3.5   3.75  4.0   4.25  4.5   4.75  5.0   5.25  5.5
alpha[i]   12.0  13.0  14.0  15.0  16.0  17.0  18.0  19.0  20.0  21.0  22.0

           23    24    25    26    27    28    29    30    31    32    33
xi0[i]     -     -     -     -     -     -     -     -     -     -     -
eta0[i]    5.75  6.0   6.25  6.5   6.75  7.0   7.25  7.5   7.75  8.0   8.25
alpha[i]   23.0  24.0  25.0  26.0  27.0  28.0  29.0  30.0  31.0  32.0  33.0

           34    35    36    37    38    39
xi0[i]     -     -     -     -     -     -
eta0[i]    8.5   8.75  9.0   9.25  9.5   9.75
alpha[i]   34.0  35.0  36.0  37.0  38.0  39.0

###################################################
 Making initial training data
###################################################
Generating training data set #0.
My wing: xi0[12]=6.40, eta0[0]=0.00, alpha[3]=3.00deg
#0 evaluation finished. cost:-0.53
------------------------
Generating training data set #1.
My wing: xi0[3]=2.35, eta0[39]=9.75, alpha[9]=9.00deg
#1 evaluation finished. cost:-1.86
------------------------
・
・
・
- After the initial training data construction is completed, the $N-N_0$ FMQA cycles begin. During the cycles, the following output is displayed sequentially for each FMQA cycle.

###################################################
 Starting FMQA cycles... constraint_weight=2
###################################################
FMQA Cycle #0
My wing: xi0[6]=3.70, eta0[23]=5.75, alpha[24]=24.00deg
#20 evaluation finished. cost:-2.09
variable updated, pred_y=-2.0937129113571267
------------------------
FMQA Cycle #1
My wing: xi0[3]=2.35, eta0[23]=5.75, alpha[24]=24.00deg
#21 evaluation finished. cost:-1.95
------------------------
・
・
・
- After the $N-N_0$ FMQA cycles, the simulation results of the flow around the airfoil, the objective function value, and the obtained optimal binary input vector are shown as follows.

###################################################
 Optimal input values and corresponding output
###################################################
pred x: [0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
My wing: xi0[3]=2.35, eta0[39]=9.75, alpha[15]=15.00deg
#40 evaluation finished. cost:-2.74
pred value: -2.735594392729547
- The output image of fmqa_solver.plot_history() described in "3.2. Transition of objective function values during the optimization process" shows how the minimum value of the negative lift-drag ratio is successively updated. The lift-drag ratio of the optimal airfoil obtained by the black-box optimization is 2.74, which is 17% higher than the maximum lift-drag ratio (2.35) obtained during the random search (initial training data construction) process.
3.4. Commentary¶
If this example program is executed as is, the resulting optimal lift-drag ratio $r_{LD}$ is approximately 2.7. The lift-drag ratio of a typical aircraft in cruise is about 15, which is much higher than that of the optimal airfoil obtained by this black-box optimization. There are two reasons for this:
- Joukowski transformation: For the sake of simplicity, this tutorial uses the Joukowski transformation, which defines the airfoil shape with only three parameters, but the airfoil shapes obtainable by this transformation are not necessarily those that achieve a high lift-drag ratio.
- Aspect ratio of the airfoil: To increase the lift-drag ratio, the aspect ratio should be increased (thinner, longer airfoils), but simulating thinner and longer objects requires higher resolution and thus higher computational cost. Here, the default range of the mapping parameters has been adjusted to allow relatively thick airfoils (small aspect ratio) relative to the airfoil length.
In any case, within the framework of the aspect ratio and the Joukowski transformation considered in this example program, we were able to use black-box optimization to find the optimal airfoil profile.