FAQ
Where can I find references for FMQA and kernel-QA?
FMQA
K. Kitai, J. Guo, S. Ju, S. Tanaka, K. Tsuda, J. Shiomi, and R. Tamura, “Designing metamaterials with quantum annealing and factorization machines,” Physical Review Research 2, 013319 (2020).
T. Inoue, Y. Seki, S. Tanaka, N. Togawa, K. Ishizaki, and S. Noda, “Towards optimization of photonic-crystal surface-emitting lasers via quantum annealing,” Optics Express 30, 43503-43512 (2022).
Kernel-QA
Y. Minamoto and Y. Sakamoto, “A black-box optimization method with polynomial-based kernels and quadratic-optimization annealing”, to be published.
My optimization does not find better solutions as the cycles proceed. What should I do?
In both FMQA and kernel-QA, the surrogate model is the key. The surrogate model is constructed by machine learning in FMQA and analytically in kernel-QA. In either method, we suggest checking the correlation coefficient between the output values in the training data and the corresponding values predicted by the constructed surrogate model, and ensuring that it shows a positive correlation (ideally close to +1). This correlation coefficient is displayed by default at every optimization cycle (see the value next to model corrcoef:):
amplify-bbopt | 2024/10/04 06:53:01 | INFO | ----------------------------------------
amplify-bbopt | 2024/10/04 06:53:01 | INFO | #17/20 optimization cycle, constraint wt: 1.57e+02
amplify-bbopt | 2024/10/04 06:53:01 | INFO | model corrcoef: 0.855, beta: 0.0
amplify-bbopt | 2024/10/04 06:53:05 | INFO | num_iterations: 20
amplify-bbopt | 2024/10/04 06:53:05 | INFO | - [obj]: x=[2.08, -1.96, 0.020000000000000018, -1.38, 0.8999999999999999], ret=3.171e+01
amplify-bbopt | 2024/10/04 06:53:05 | INFO | y_hat=3.171e+01, best objective=2.316e+01
amplify-bbopt | 2024/10/04 06:53:05 | INFO | ----------------------------------------
amplify-bbopt | 2024/10/04 06:53:05 | INFO | #18/20 optimization cycle, constraint wt: 1.57e+02
amplify-bbopt | 2024/10/04 06:53:05 | INFO | model corrcoef: 0.857, beta: 0.0
amplify-bbopt | 2024/10/04 06:53:10 | INFO | num_iterations: 19
amplify-bbopt | 2024/10/04 06:53:10 | INFO | - [obj]: x=[2.12, 2.4800000000000004, 0.020000000000000018, -3.0, 0.8999999999999999], ret=4.508e+01
amplify-bbopt | 2024/10/04 06:53:10 | INFO | y_hat=4.508e+01, best objective=2.316e+01
amplify-bbopt | 2024/10/04 06:53:10 | INFO | ----------------------------------------
amplify-bbopt | 2024/10/04 06:53:10 | INFO | #19/20 optimization cycle, constraint wt: 1.57e+02
amplify-bbopt | 2024/10/04 06:53:10 | INFO | model corrcoef: 0.856, beta: 0.0
amplify-bbopt | 2024/10/04 06:53:17 | INFO | num_iterations: 24
amplify-bbopt | 2024/10/04 06:53:17 | INFO | - [obj]: x=[-3.0, -1.96, -3.0, 2.1799999999999997, -3.0], ret=4.165e+01
amplify-bbopt | 2024/10/04 06:53:17 | INFO | y_hat=4.165e+01, best objective=2.316e+01
...
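As a sketch of this diagnostic (not the library's internal code; the data here is illustrative), the correlation coefficient between training outputs and surrogate predictions can be computed with NumPy:

```python
import numpy as np

# Illustrative data: black-box outputs y in the training data, and the
# surrogate model's predictions y_pred for the same inputs. Both arrays
# are synthetic stand-ins for values you would collect during optimization.
rng = np.random.default_rng(0)
y = rng.uniform(0.0, 50.0, size=30)           # training-data output values
y_pred = y + rng.normal(0.0, 5.0, size=30)    # surrogate predictions with some error

# Pearson correlation coefficient; a value close to +1 means the surrogate
# reproduces the training data well, while a value near 0 (or negative)
# indicates the surrogate is not capturing the black-box function.
corrcoef = np.corrcoef(y, y_pred)[0, 1]
print(f"model corrcoef: {corrcoef:.3f}")
```

If this value stays low across cycles, the surrogate is unlikely to guide the optimizer toward better solutions, regardless of how many cycles you run.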
If you expect the output values of your black-box function \(f\) to show exponent-like behavior, you may want to scale \(f\) and use \(\hat{f}\) as your black-box function in Amplify-BBOpt, for example via a logarithmic rescaling such as:

\[
\hat{f}(x) = \log\left(\frac{f(x)}{c} + 1\right),
\]

where \(c\) can be chosen between the median and maximum of the output values in your initial training data samples. This conversion would (more or less) linearize the output of \(f\), making the surrogate model more suitable for optimization.
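A minimal sketch of such a wrapper, assuming a logarithmic rescaling \(\hat{f}(x) = \log(f(x)/c + 1)\) as one concrete choice (the function names and sample points below are illustrative, not part of the library):

```python
import math
import statistics

def f(x: float) -> float:
    # Stand-in black-box function whose output grows exponentially.
    return math.exp(x)

# Choose c from the outputs of the initial training samples, somewhere
# between their median and maximum; here we take the median.
initial_outputs = [f(x) for x in (0.5, 1.0, 1.5, 2.0, 2.5)]
c = statistics.median(initial_outputs)

def f_hat(x: float) -> float:
    # Rescaled black-box function to hand to the optimizer instead of f.
    # log(f/c + 1) compresses exponent-like growth toward linear behavior.
    return math.log(f(x) / c + 1.0)
```

You would then register `f_hat` (rather than `f`) as the objective, and undo the scaling only when reporting the final objective value.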