Black-Box Optimization with Quantum Annealing and Ising Machines¶
This code example introduces a black-box optimization method known as FMQA (Factorization Machine with Quantum Annealing)$^*$.
For this purpose, the sample code treats an algebraic expression as an unknown black-box function and estimates the input values that minimize its output value. For examples of FMQA applied to more realistic model cases, with sample code, please see the following links.
- Black-Box Optimization Exploration of Model Superconducting Materials
- Black-Box Optimization of Operating Condition in a Chemical Reactor
- Black-Box Optimization of Airfoil Geometry by Fluid Flow Simulation
If the optimization target is a known, quadratic function, it can be optimized directly in the form of a Quadratic Unconstrained Binary Optimization (QUBO) problem. For a description of such QUBO problems, sample code, and instructions on how to use Amplify, please see the following links (excerpts).
This notebook is organized as follows:
- 1. Introduction to FMQA
- 1.1. Black-box optimization
- 1.2. Bayesian optimization
- 1.3. FMQA introduction
- 1.4. FMQA procedure
- 2. FMQA program implementation
- 3. FMQA execution example
- 4. References
$^*$ The name FMQA may not be strictly appropriate when an Ising machine other than a quantum annealing machine is used for the search. However, since the methodology for this type of black-box optimization is the same regardless of the type of Ising machine used, "FMQA" will be used throughout this tutorial. Note that Amplify makes it straightforward to switch between various annealing machines.
1. Introduction to FMQA¶
1.1. Black-box optimization¶
FMQA is a black-box optimization method similar to Bayesian optimization. Usually, in mathematical optimization, the objective is to find a decision variable $\boldsymbol{x}$ such that the objective function of interest, $f(\boldsymbol{x})$, is minimized (or maximized). If information about the objective function $f(\boldsymbol{x})$ (functional form, gradient, submodularity, convexity, etc.) is available, efficient optimization can be performed.
$$ \begin{aligned} \mathrm{Minimize}&\,\,f(\boldsymbol{x}) \\ \mathrm{subject\,\,to\,\,}&\boldsymbol{x} \in [0,1]^D \end{aligned} $$
For example, suppose the function $f(\boldsymbol{x})$ is known and quadratic in $\boldsymbol{x}$, as in some of the optimization problems shown in the Amplify demo tutorial. In such a case, $f(\boldsymbol{x})$ can be used as the objective function, and the optimization can be performed directly as a quadratic unconstrained binary optimization (QUBO) problem.
Here, a binary variable vector of size $D$ is assumed for $\boldsymbol{x}$. However, non-binary variables can also be handled in FMQA, for example by using one-hot encoding. Such an example can be found in Black-Box Optimization of Airfoil Geometry by Fluid Flow Simulation.
On the other hand, in the case of optimization to minimize (or maximize) values obtained by simulation or experiment for physical or social phenomena, the objective function $f(\boldsymbol{x})$ corresponds to simulation or experiment, and the function cannot be described explicitly. Mathematical optimization for such an unknown objective function $f(\boldsymbol{x})$ is called black-box optimization.
In addition, evaluating such an objective function (running a simulation or an experiment) is usually expensive in terms of time, money, or other resources. Therefore, even if the set of decision variables is finite, optimization by exhaustive search is generally impractical, and an optimization method that requires as few objective function evaluations as possible is needed.
1.2. Bayesian optimization¶
In Bayesian optimization, black-box optimization is performed by repeating the following optimization cycle.
- Construct an acquisition function $g(\boldsymbol{x})$ from the training data.
- Estimate the point $\hat{\boldsymbol{x}}$ where the acquisition function $g(\boldsymbol{x})$ is minimized.
- Add the evaluation result $(\hat{\boldsymbol{x}}, \hat{y})$ of the objective function, $\hat{y} = f(\hat{\boldsymbol{x}})$, to the training data.
By repeating this cycle, the prediction accuracy of the acquisition function $g(\boldsymbol{x})$ improves near the optimum, and the resulting $\hat{\boldsymbol{x}}$ is expected to approach the true decision variable that minimizes the objective function $f(\boldsymbol{x})$. However, the Bayesian optimization cycle poses the following two challenges:
- How to construct the acquisition function $g(\boldsymbol{x})$
- How to estimate the $\hat{\boldsymbol{x}}$ that minimizes the acquisition function
FMQA, described below, is a general framework that addresses these two challenges and thereby realizes black-box optimization.
1.3. FMQA introduction¶
Consider using the following Factorization Machine (FM), a type of machine learning model, as the acquisition function $g(\boldsymbol{x})$ required in Bayesian optimization.
$$ \begin{aligned} g(\boldsymbol{x} | \boldsymbol{w}, \boldsymbol{v}) &= w_0 + \langle \boldsymbol{w}, \boldsymbol{x}\rangle + \sum_{i=1}^D \sum_{j=i+1}^D \langle \boldsymbol{v}_i, \boldsymbol{v}_j \rangle x_i x_j \\ &=w_0 + \sum_{i=1}^D w_i x_i + \sum_{i=1}^D \sum_{j=i+1}^D \sum_{f=1}^k v_{if}v_{jf}x_ix_j \\ &=w_0 + \sum_{i=1}^D w_i x_i + \frac{1}{2}\sum_{f=1}^k\left(\left(\sum_{i=1}^D v_{i f} x_i\right)^2 - \sum_{i=1}^D v_{i f}^2 x_i^2\right) \end{aligned} $$
Since FM is quadratic in $\boldsymbol{x}$, the above expression has a functional form that can be optimized as a QUBO problem. Here, $w_0$, $\boldsymbol{w}$ ($w_i$), and $\boldsymbol{v}$ ($v_{if}$) are the FM parameters (weights and biases in a machine learning context) obtained by training the model, and $k$ is a hyperparameter.
The number of FM parameters depends on the hyperparameter $k$. When $k=D$, FM has the same degrees of freedom as the QUBO interaction terms, while a smaller $k$ reduces the number of FM parameters and helps suppress overfitting.
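As a rough illustration, for $D=100$ a full QUBO model has $D(D-1)/2 = 4950$ independent interaction coefficients, whereas the FM interaction part with $k=10$ is described by only $D \times k = 1000$ entries of $\boldsymbol{v}$ (in addition to the $D+1$ linear and bias terms present in both cases).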
Thus, using FM as the acquisition function $g(\boldsymbol{x})$ and minimizing it with quantum annealing (QA) or Ising machines resolves the issues mentioned above and makes the approach applicable to general problems. This black-box optimization method, which combines quantum annealing or Ising machines with machine learning, is called FMQA.
1.4. FMQA procedure¶
The FMQA procedure follows the Bayesian optimization cycle described above:
First, the number of objective function evaluations $N$ that can be performed during the optimization process is determined based on the cost (time, money, etc.) of each evaluation and the available resources. For example, if one objective function evaluation (experiment or simulation) takes one hour and the FMQA optimization must be completed within one day, the maximum number of evaluations is $N=24$. We then choose the number of initial training data samples $N_0$ such that $N_0<N$, prepare the initial training data as described below, and finally run the FMQA cycle $N-N_0$ times.
- Preparation of initial training data ($N_0$ samples)
  - Prepare $N_0$ input samples $\{\boldsymbol{x}_1, \boldsymbol{x}_2, \cdots, \boldsymbol{x}_{N_0}\}$ and the corresponding $N_0$ outputs $\{f(\boldsymbol{x}_1), f(\boldsymbol{x}_2), \cdots, f(\boldsymbol{x}_{N_0})\}$ as initial training data.
- FMQA optimization cycle ($N-N_0$ times)
  1. Train the FM model using the (most recent) training data and obtain the FM parameters $(\boldsymbol{v}, \boldsymbol{w})$.
  2. Estimate the input $\hat{\boldsymbol{x}}$ that minimizes the acquisition function $g(\boldsymbol{x})$ by using Amplify.
  3. Evaluate the objective function $f(\boldsymbol{x})$ at $\hat{\boldsymbol{x}}$ to obtain $\hat{y} = f(\hat{\boldsymbol{x}})$.
  4. Add $(\hat{\boldsymbol{x}}, \hat{y})$ to the training data.
  Repeat steps 1-4 above $N-N_0$ times.
As the FMQA process progresses, the prediction accuracy of the FM is expected to improve near the optimization point and a better estimate of $\hat{\boldsymbol{x}}$ is expected.
2. FMQA program implementation¶
2.1. Random seed initialization¶
We define a function seed_everything() to initialize the random seed values so that the machine learning results do not change between runs.
import os
import torch
import numpy as np


def seed_everything(seed=0):
    os.environ["PYTHONHASHSEED"] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.backends.cudnn.deterministic = True
2.2. Configuration of Amplify client¶
Here, we create an Amplify client and set the necessary parameters. In the following, we set the timeout for a single search by the Ising machine to 1 second.
from amplify.client import FixstarsClient
client = FixstarsClient()
client.parameters.timeout = 1000 # Timeout 1s
# client.token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" # If you use Amplify in a local environment, enter the Amplify API token.
2.3. Implementing FM with PyTorch¶
In this example code, FM is implemented with PyTorch. In the TorchFM class, we define the acquisition function $g(\boldsymbol{x})$ as a machine learning model. Each term in $g(\boldsymbol{x})$ corresponds directly to out_lin, out_1, out_2, and out_inter in the TorchFM class, as in the following equation.
$$ \begin{aligned} g(\boldsymbol{x} | \boldsymbol{w}, \boldsymbol{v}) &= \underset{\color{red}{\mathtt{out\_lin}}}{\underline{ w_0 + \sum_{i=1}^D w_i x_i} } + \underset{\color{red}{\mathtt{out\_inter}}}{\underline{\frac{1}{2} \left[\underset{\color{red}{\mathtt{out\_1}}}{\underline{ \sum_{f=1}^k\left(\sum_{i=1}^D v_{i f} x_i\right)^2 }} - \underset{\color{red}{\mathtt{out\_2}}}{\underline{ \sum_{f=1}^k\sum_{i=1}^D v_{i f}^2 x_i^2 }} \right] }} \end{aligned} $$
import torch.nn as nn


class TorchFM(nn.Module):
    def __init__(self, d: int, k: int):
        super().__init__()
        self.V = nn.Parameter(torch.randn(d, k), requires_grad=True)
        self.lin = nn.Linear(
            d, 1
        )  # The first and second terms on the right-hand side are expressed as a fully connected (linear) layer

    def forward(self, x):
        out_1 = torch.matmul(x, self.V).pow(2).sum(1, keepdim=True)
        out_2 = torch.matmul(x.pow(2), self.V.pow(2)).sum(1, keepdim=True)
        out_inter = 0.5 * (out_1 - out_2)
        out_lin = self.lin(x)
        out = out_inter + out_lin
        return out
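As a quick check of the model definition above, an untrained TorchFM instance can be evaluated on random binary inputs; the sizes used here ($d=4$, $k=2$, a batch of 3) are arbitrary illustration values and not part of the original sample.
# A minimal sketch: evaluate an untrained TorchFM model on random binary inputs
_demo_model = TorchFM(d=4, k=2)
_demo_x = torch.randint(0, 2, (3, 4)).float()  # batch of 3 binary input vectors
print(_demo_model(_demo_x).shape)  # -> torch.Size([3, 1]), one g(x) value per input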
Next, a function train() is defined to train the FM based on the training data sets. As in general machine learning, this function splits the data sets into training data and validation data, optimizes the FM parameters using the training data, and validates the model during training using the validation data. The train() function returns the model with the highest prediction accuracy on the validation data.
from torch.utils.data import TensorDataset, DataLoader
from sklearn.model_selection import train_test_split
import copy
def train(
    X,
    y,
    model_class=None,
    model_params=None,
    batch_size=1024,
    epochs=3000,
    criterion=None,
    optimizer_class=None,
    opt_params=None,
    lr_sche_class=None,
    lr_sche_params=None,
):
    X_tensor, y_tensor = (
        torch.from_numpy(X).float(),
        torch.from_numpy(y).float(),
    )
    # Split the data set into training data and validation data
    indices = np.array(range(X.shape[0]))
    indices_train, indices_valid = train_test_split(
        indices, test_size=0.2, random_state=42
    )
    train_set = TensorDataset(X_tensor[indices_train], y_tensor[indices_train])
    valid_set = TensorDataset(X_tensor[indices_valid], y_tensor[indices_valid])
    loaders = {
        "train": DataLoader(train_set, batch_size=batch_size, shuffle=True),
        "valid": DataLoader(valid_set, batch_size=batch_size, shuffle=False),
    }

    model = model_class(**model_params)
    best_model_wts = copy.deepcopy(model.state_dict())
    optimizer = optimizer_class(model.parameters(), **opt_params)
    if lr_sche_class is not None:
        scheduler = lr_sche_class(optimizer, **lr_sche_params)

    best_score = 1e18
    for epoch in range(epochs):
        losses = {"train": 0.0, "valid": 0.0}
        for phase in ["train", "valid"]:
            if phase == "train":
                model.train()
            else:
                model.eval()
            for batch_x, batch_y in loaders[phase]:
                optimizer.zero_grad()
                out = model(batch_x).T[0]
                loss = criterion(out, batch_y)
                losses[phase] += loss.item() * batch_x.size(0)
                with torch.set_grad_enabled(phase == "train"):
                    if phase == "train":
                        loss.backward()
                        optimizer.step()
            losses[phase] /= len(loaders[phase].dataset)
        # Keep the weights with the best validation loss seen so far
        with torch.no_grad():
            model.eval()
            if best_score > losses["valid"]:
                best_model_wts = copy.deepcopy(model.state_dict())
                best_score = losses["valid"]
        if lr_sche_class is not None:
            scheduler.step()
    # Restore the best weights and return the model in evaluation mode
    with torch.no_grad():
        model.load_state_dict(best_model_wts)
        model.eval()
    return model
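The following is a minimal sketch of how train() can be called. The random data, the sizes ($D=8$, $k=2$), and the optimizer settings are arbitrary illustration values chosen only to check that the function runs; they are not recommendations and are not part of the original sample.
# A minimal sketch: fit TorchFM to a small random data set (illustration only)
_X_demo = np.random.randint(0, 2, size=(20, 8))
_y_demo = np.random.rand(20)
_model_demo = train(
    _X_demo,
    _y_demo,
    model_class=TorchFM,
    model_params={"d": 8, "k": 2},
    batch_size=4,
    epochs=10,
    criterion=nn.MSELoss(),
    optimizer_class=torch.optim.AdamW,
    opt_params={"lr": 0.1},
)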
2.4. Construction of initial training data¶
The gen_training_data function evaluates the objective function $f(\boldsymbol{x})$ for input values $\boldsymbol{x}$ to produce $N_0$ input-output pairs (the initial training data). The input values $\boldsymbol{x}$ can be determined in a variety of ways, for example by using random numbers or values suitable for machine learning based on prior knowledge. You can also build up the training data from the results of previous experiments or simulations.
def gen_training_data(D: int, N0: int, true_func):
    assert N0 < 2**D
    # N0 input values are obtained using random numbers
    X = np.random.randint(0, 2, size=(N0, D))
    # Remove duplicate input values and add new input values using random numbers
    X = np.unique(X, axis=0)
    while X.shape[0] != N0:
        X = np.vstack((X, np.random.randint(0, 2, size=(N0 - X.shape[0], D))))
        X = np.unique(X, axis=0)
    y = np.zeros(N0)
    # Obtain output values corresponding to N0 input values by evaluating the objective function, true_func
    for i in range(N0):
        if i % 10 == 0:
            print(f"Generating {i}-th training data set.")
        y[i] = true_func(X[i])
    return X, y
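As noted above, the initial data can also include inputs that were already evaluated in previous experiments or simulations. The following is a minimal sketch of that idea; the arrays X_known (shape (M, D)) and y_known (shape (M,)) are hypothetical results from earlier evaluations and are not part of the original sample.
# A minimal sketch: combine M previously evaluated pairs (X_known, y_known)
# with randomly generated samples so that the initial data contains N0 pairs.
# Note: duplicates between X_known and the random samples are not removed here.
def gen_training_data_with_history(D: int, N0: int, true_func, X_known, y_known):
    X_rand, y_rand = gen_training_data(D, N0 - X_known.shape[0], true_func)
    return np.vstack((X_known, X_rand)), np.append(y_known, y_rand)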
2.5. Execution class for FMQA cycle¶
FMQA.cycle() executes the FMQA cycle $N-N_0$ times using the pre-prepared initial training data. FMQA.step() is a function that executes a single FMQA cycle and is called $N-N_0$ times by FMQA.cycle().
from amplify import (
    Solver,
    BinarySymbolGenerator,
    sum_poly,
    BinaryMatrix,
    BinaryQuadraticModel,
)
import matplotlib.pyplot as plt
import sys


class FMQA:
    def __init__(self, D: int, N: int, N0: int, k: int, true_func, solver) -> None:
        assert N0 < N
        self.D = D
        self.N = N
        self.N0 = N0
        self.k = k
        self.true_func = true_func
        self.solver = solver
        self.y = None

    # A member function that repeatedly performs the FMQA cycle (N-N0) times based on the training data, adding new training data each time
    def cycle(self, X, y, log=False) -> np.ndarray:
        print(f"Starting FMQA cycles...")
        pred_x = X[0]
        pred_y = 1e18
        for i in range(self.N - self.N0):
            print(f"FMQA Cycle #{i} ", end="")
            try:
                x_hat = self.step(X, y)
            except RuntimeError:
                sys.exit(f"Unknown error, i = {i}")

            # If an input value identical to the found x_hat already exists in the current training data set, a neighboring value is used as a new x_hat
            is_identical = True
            while is_identical:
                is_identical = False
                for j in range(i + self.N0):
                    if np.all(x_hat == X[j, :]):
                        change_id = np.random.randint(0, self.D, 1)
                        x_hat[change_id.item()] = 1 - x_hat[change_id.item()]
                        if log:
                            print(f"{i=}, Identical x is found, {x_hat=}")
                        is_identical = True
                        break

            # Evaluate the objective function f() with x_hat
            y_hat = self.true_func(x_hat)

            # Add the input-output pair [x_hat, y_hat] to the training data set
            X = np.vstack((X, x_hat))
            y = np.append(y, y_hat)

            # Copy the input-output pair to [pred_x, pred_y] when the evaluated value of the objective function updates the minimum value
            if pred_y > y_hat:
                pred_y = y_hat
                pred_x = x_hat
                print(f"variable updated, {pred_y=}")
            else:
                print("")

            # Exit the "for" statement if all inputs have been fully explored
            if len(y) >= 2**self.D:
                print(f"Fully searched at {i=}. Terminating FMQA cycles.")
                break
        self.y = y
        return pred_x

    # Member function to perform one FMQA cycle
    def step(self, X, y) -> np.ndarray:
        # Train the FM
        model = train(
            X,
            y,
            model_class=TorchFM,
            model_params={"d": self.D, "k": self.k},
            batch_size=8,
            epochs=2000,
            criterion=nn.MSELoss(),
            optimizer_class=torch.optim.AdamW,
            opt_params={"lr": 1},
        )
        # Extract the FM parameters from the trained FM model
        v, w, w0 = list(model.parameters())
        v = v.detach().numpy()
        w = w.detach().numpy()[0]
        w0 = w0.detach().numpy()[0]

        # Solve the QUBO problem using a quantum annealing or Ising machine
        gen = BinarySymbolGenerator()  # Declare a variable generator, BinaryPoly
        q = gen.array(self.D)  # Generate decision variables using BinaryPoly
        cost = self.__FM_as_QUBO(q, w0, w, v)  # Define FM as a QUBO equation from the FM parameters
        result = self.solver.solve(cost)  # Pass the objective function to the Amplify solver
        if len(result.solutions) == 0:
            raise RuntimeError("No solution was found.")
        values = result.solutions[0].values
        q_values = q.decode(values)
        return q_values

    # A function that defines FM as a QUBO equation from the FM parameters. As with the previously defined TorchFM class, the formula is written following the acquisition function form of g(x).
    def __FM_as_QUBO(self, x, w0, w, v):
        lin = w0 + (x.T @ w)
        D = w.shape[0]
        out_1 = sum_poly(self.k, lambda i: sum_poly(D, lambda j: x[j] * v[j, i]) ** 2)
        # Note that x[j] = x[j]^2 holds in the following equation, since x[j] is a binary variable
        out_2 = sum_poly(
            self.k, lambda i: sum_poly(D, lambda j: x[j] * v[j, i] * v[j, i])
        )
        return lin + (out_1 - out_2) / 2

    """The sum_poly used in __FM_as_QUBO above is inefficient in terms of computation speed and memory. In the case of FM,
    where the interaction terms of the decision variables are generally nonzero, the following implementation using BinaryMatrix
    is more efficient. Here, the quadratic terms in BinaryMatrix correspond to the off-diagonal terms of the upper
    triangular matrix, so the factor of 1/2 applied to the quadratic terms in the FM formula is unnecessary. Also, although x is taken as an
    argument just to match the function signature with __FM_as_QUBO above (implementation using sum_poly), it is not needed in
    this implementation using BinaryMatrix.

    def __FM_as_QUBO(self, x, w0, w, v):
        out_1_matrix = v @ v.T
        out_2_matrix = np.diag((v * v).sum(axis=1))
        matrix = BinaryMatrix(out_1_matrix - out_2_matrix + np.diag(w))
        # Do not forget to put the constant term w0 in the second argument of BinaryQuadraticModel
        model = BinaryQuadraticModel(matrix, w0)
        return model
    """

    # A function to plot the history of the objective function evaluations performed during the initial training data construction (blue) and during the FMQA cycles (red)
    def plot_history(self):
        assert self.y is not None
        fig = plt.figure(figsize=(6, 4))
        plt.plot(
            [i for i in range(self.N0)],
            self.y[: self.N0],
            marker="o",
            linestyle="-",
            color="b",
        )  # Objective function evaluation values at the time of initial training data generation (random process)
        plt.plot(
            [i for i in range(self.N0, self.N)],
            self.y[self.N0 :],
            marker="o",
            linestyle="-",
            color="r",
        )  # Objective function evaluation values during the FMQA cycles (FMQA cycle process)
        plt.xlabel("i-th evaluation of f(x)", fontsize=18)
        plt.ylabel("f(x)", fontsize=18)
        plt.tick_params(labelsize=18)
        return fig
3. FMQA execution example¶
3.1. Optimization for quadratic expressions of $\boldsymbol{x}$¶
Let us perform black-box optimization using FMQA. While FMQA is usually applied to an objective function that is a black box and expensive to evaluate, this tutorial considers the following algebraic expression as the objective function for simplicity of explanation.
$$ f(\boldsymbol{x}) = \boldsymbol{x}^T Q \boldsymbol{x} $$
Here, $Q$ is a $d\times d$ matrix whose components are generated by random numbers and have zero mean, as defined in make_Q. The above $f(\boldsymbol{x})$, although a known function, is treated as an unknown (black-box) function.
Also, note that under the following conditions ($D=100$, $N=70$, $N_0=60$), it will take several minutes to complete all FMQA cycles. An example output is shown in "3.3. Example output from this FMQA sample program".
# Output a d-dimensional symmetric matrix whose components have zero mean
def make_Q(d) -> np.ndarray:
    Q_true = np.random.rand(d, d)
    Q_true = (Q_true + Q_true.T) / 2
    Q_true = Q_true - np.mean(Q_true)
    return Q_true
# Initialize random seed values
seed_everything(0)
# Size of input values (problem size)
D = 100
# Matrix Q used in the "true function"
Q = make_Q(D)
def true_func(x):
    # Definition of the objective function (xQx).
    # Essentially, cost is the result value of the unknown function (simulation or experiment) or the result value of the subsequent process.
    cost = x @ Q @ x
    return cost
N = 70 # Number of times the function can be evaluated
N0 = 60 # Number of samples of initial training data
k = 10 # Dimension of the vector in FM (hyperparameters)
# client: Amplify client created earlier
solver = Solver(client)
# Generate initial training data
X, y = gen_training_data(D, N0, true_func)
# Instantiate FMQA class
fmqa_solver = FMQA(D, N, N0, k, true_func, solver)
# Run FMQA cycle
pred_x = fmqa_solver.cycle(X, y)
# Output optimization results
print("pred x:", pred_x)
print("pred value:", true_func(pred_x))
3.2. Transition of objective function values during the FMQA optimization process¶
The following line displays the evolution of the objective function values during the FMQA optimization process. The initial $N_0$ objective function values (blue line) are obtained from randomly generated input values during initial training data generation. The following red line shows the objective function values during the $N-N_0$ FMQA optimization cycles.
The plot shows that the FMQA optimization cycles successively update the minimum value of the objective function found so far (see the output example in "3.3. Example output from this FMQA sample program").
fig = fmqa_solver.plot_history()
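Since plot_history() returns a Matplotlib figure object, the history plot can also be saved to a file if desired; the file name below is arbitrary.
# Optionally save the history plot to a file (file name chosen arbitrarily)
fig.savefig("fmqa_history.png", bbox_inches="tight")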
3.3. Example output from this FMQA sample program¶
In general, due to the nature of the heuristic algorithms used in FixstarsClient, the solutions obtained are not completely reproducible; however, typical standard output and image output obtained when running this sample code are shown below. The values obtained may vary slightly from run to run.
- When the FMQA code described in "3.1. Optimization for quadratic expressions of $\boldsymbol{x}$" is run under the given conditions, the following standard output is sequentially produced.
Generating 0-th training data set.
Generating 10-th training data set.
Generating 20-th training data set.
Generating 30-th training data set.
Generating 40-th training data set.
Generating 50-th training data set.
Starting FMQA cycles...
FMQA Cycle #0 variable updated, pred_y=-59.15752919611154
FMQA Cycle #1
FMQA Cycle #2 variable updated, pred_y=-72.66802296872575
FMQA Cycle #3
FMQA Cycle #4
FMQA Cycle #5
FMQA Cycle #6
FMQA Cycle #7
FMQA Cycle #8 variable updated, pred_y=-76.81540215271143
FMQA Cycle #9
pred x: [0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 1. 0. 1. 0. 1. 1. 1. 1. 1. 1. 1. 0. 1. 0. 1. 0. 0. 0. 0. 0. 1. 1. 1. 0. 1. 1. 1. 1. 0. 1. 1. 1. 0. 1. 1. 0. 1. 1. 2. 1. 1. 1. 0. 1. 0. 1. 0. 1. 0. 0. 0. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 3. 0. 1. 1. 0. 1. 0. 0. 1. 0. 1. 1. 0. 1. 0. 1. 0. 1. 0. 1. 1. 0. 1. 0. 4. 0. 0. 1.]
pred value: -76.81540215271143
- The output image of fmqa_solver.plot_history() described in "3.2. Transition of objective function values during the FMQA optimization process" is shown below.
3.4. Summary¶
In this tutorial, so-called FMQA optimization was performed on a relatively simple known function. Amplify also provides examples and sample code for more realistic model cases.
3.5. Appendix¶
Since $f(\boldsymbol{x}) = \boldsymbol{x}^{\top}Q\boldsymbol{x}$ is a known quadratic function, the optimal input values can also be searched for directly by quantum annealing or Ising machines without using FMQA. The code below optimizes this function directly as a QUBO problem.
# Declare a variable generator, BinaryPoly
gen = BinarySymbolGenerator()
# Create 1D array of decision variables with size D
q = gen.array(D)
# Formulate xQx as the objective function of QUBO
cost = sum_poly(D, lambda i: sum_poly(D, lambda j: Q[i, j] * q[i] * q[j]))
# Pass the objective function to Amplify for solution seeking.
result = solver.solve(cost)
if len(result.solutions) == 0:
    raise RuntimeError("No solution was found.")
# Extract and display the estimated optimal solution.
values = result.solutions[0].values
true_x = q.decode(values)
print("true x:", true_x)
print("true value:", true_func(true_x))
4. References¶
This black-box optimization method, which combines quantum annealing or Ising machines with machine learning, was originally proposed as FMQA in the following research:
- K. Kitai, J. Guo, S. Ju, S. Tanaka, K. Tsuda, J. Shiomi, and R. Tamura, "Designing metamaterials with quantum annealing and factorization machines", Physical Review Research 2, 013319 (2020).
In this study, a search for metamaterials is carried out using FMQA, which is also shown to outperform Bayesian optimization, a conventional black-box optimization method.
In the following study, the same black-box optimization method is also applied to the design of photonic crystals.
- T. Inoue, Y. Seki, S. Tanaka, N. Togawa, K. Ishizaki, and S. Noda, "Towards optimization of photonic-crystal surface-emitting lasers via quantum annealing," Opt. Express 30, 43503-43512 (2022).
These studies suggest that this optimization method (FMQA), based on FM and combinatorial optimization, may be applicable to black-box optimization problems in a wide range of fields. Fixstars Amplify provides several examples of such black-box optimization in the areas of chemical reactions, fluid dynamics, and material search, as follows: