Parallel Solver Execution¶
parallel_solve() can send queries to multiple solvers and models simultaneously. When multiple runs are required, parallel execution can hide the time spent on model transformation, request data creation, and data transfer over the network to the solver. In addition, for solvers that can process multiple problems at once, parallel execution is expected to improve execution efficiency.
Parallel execution example¶
First, construct a model just as when using the solve() function.
from amplify import VariableGenerator, one_hot
# Decision variables: three binary variables q[0], q[1], q[2]
gen = VariableGenerator()
q = gen.array("Binary", 3)
# Objective function and a one-hot constraint on q, combined into a single model
objective = q[0] * q[1] - q[2]
constraint = one_hot(q)
model = objective + constraint
Next, we create multiple solver clients.
from amplify import FixstarsClient, DWaveSamplerClient
from datetime import timedelta
# Fixstars Amplify Annealing Engine client
fixstars_client = FixstarsClient()
# fixstars_client.token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
fixstars_client.parameters.timeout = timedelta(milliseconds=1000)
# D-Wave sampler client
dwave_client = DWaveSamplerClient()
# dwave_client.token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
dwave_client.parameters.num_reads = 100
The parallel_solve() function is used to send requests to multiple solvers. It has the same interface as solve(), but accepts lists for its arguments.
from amplify import parallel_solve
fixstars_result, dwave_result = parallel_solve(model, [fixstars_client, dwave_client])
The code above has the same effect as running the following in parallel. Accordingly, fixstars_result and dwave_result are instances of the Result class.
from amplify import solve
fixstars_result = solve(model, fixstars_client)
dwave_result = solve(model, dwave_client)
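Since each returned object is an ordinary Result, it can be inspected in the usual way. The following is a minimal sketch, not part of the original example, that prints the best objective value and decision variable values of each run; it assumes that each solver found at least one feasible solution and that len(result) gives the number of returned solutions.
# Sketch: inspect both results from the parallel run above.
for name, result in [("Fixstars", fixstars_result), ("D-Wave", dwave_result)]:
    if len(result) == 0:  # assumed: an empty result means no feasible solution
        print(f"{name}: no feasible solution found")
        continue
    print(f"{name}: objective = {result.best.objective}")
    print(f"{name}: q = {q.evaluate(result.best.values)}")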
Tip
By passing a list of models to parallel_solve(), you can run different models in parallel with the same solver, or different models with different solvers; a short sketch follows the examples below.
model1 = objective + constraint
model2 = objective + 2 * constraint
- To solve a single model with multiple solvers in parallel:
result1, result2 = parallel_solve(model1, [fixstars_client, dwave_client])
- To solve multiple models in parallel with a single solver:
result1, result2 = parallel_solve([model1, model2], fixstars_client)
- To solve multiple models in parallel with multiple solvers:
result1, result2 = parallel_solve([model1, model2], [fixstars_client, dwave_client])
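For instance, a single client can evaluate several constraint weights in one call by building one model per weight. The snippet below is an illustrative sketch that reuses objective, constraint, and fixstars_client from above; the weight values are arbitrary, and it assumes the results are returned in the same order as the models and that each run found a feasible solution.
# Sketch: sweep the constraint weight with a single solver.
weights = [1, 2, 5]  # arbitrary illustration values
models = [objective + w * constraint for w in weights]
results = parallel_solve(models, fixstars_client)
for w, result in zip(weights, results):
    print(f"weight {w}: objective = {result.best.objective}")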
Parallel execution parameters¶
parallel_solve() accepts keyword arguments similar to those of solve(). By default a keyword argument is shared by all parallel runs, but, just as with models and solvers, a list can be given to specify a different value for each run. In that case, every list must have the same number of elements.
As an example, the following gives dry_run as a list while num_solves is shared by both runs,
fixstars_result, dwave_result = parallel_solve(
    model,
    [fixstars_client, dwave_client],
    dry_run=[False, True],
    num_solves=2,
)
and the above has the same effect as running the following in parallel.
fixstars_result = solve(model, fixstars_client, dry_run=False, num_solves=2)
dwave_result = solve(model, dwave_client, dry_run=True, num_solves=2)
Depending on the solver type and your contract, it may not be possible to send multiple requests to the same solver simultaneously. The maximum number of parallel runs can be limited with the concurrency keyword argument of parallel_solve(). The default is 0, in which case the number of parallel runs is determined automatically from the number of CPUs of the machine running the code.
results = parallel_solve(
    model,
    [fixstars_client, dwave_client, fixstars_client, dwave_client],
    concurrency=2,
)
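The results are returned in the same order as the clients (and models) passed in, so they can be matched up positionally. As a minimal sketch under that assumption, and assuming len(result) counts the returned solutions, the best of the four runs above can be selected as follows.
# Sketch: pick the run with the lowest objective among the four executions
# above; runs without a feasible solution are skipped.
feasible = [r for r in results if len(r) > 0]
best = min(feasible, key=lambda r: r.best.objective)
print("best objective:", best.best.objective)
print("best solution :", q.evaluate(best.best.values))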