# Parallel Solver Execution

{py:func}`~amplify.parallel_solve` can send queries to multiple solvers and models simultaneously. When multiple runs are required, parallel execution can hide the time spent on model transformation, request data creation, and network transfer to the solver. For solvers that can process multiple problems simultaneously, parallel execution is also expected to improve execution efficiency.

## Parallel execution example

First, construct a model as if using the {py:func}`~amplify.solve` function.

```{testcode}
from amplify import VariableGenerator, one_hot

gen = VariableGenerator()
q = gen.array("Binary", 3)
objective = q[0] * q[1] - q[2]
constraint = one_hot(q)
model = objective + constraint
```

Next, we create multiple solver clients.

```{testcode}
from amplify import FixstarsClient, DWaveSamplerClient
from datetime import timedelta

fixstars_client = FixstarsClient()
# fixstars_client.token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
fixstars_client.parameters.timeout = timedelta(milliseconds=1000)

dwave_client = DWaveSamplerClient()
# dwave_client.token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
dwave_client.parameters.num_reads = 100
```

The {py:func}`~amplify.parallel_solve` function sends requests to multiple solvers. It has the same interface as {py:func}`~amplify.solve`, but its arguments also accept lists.

```python
from amplify import parallel_solve

fixstars_result, dwave_result = parallel_solve(model, [fixstars_client, dwave_client])
```

The code above has the same effect as running the following in parallel. Thus, `fixstars_result` and `dwave_result` are instances of the {py:class}`~amplify.Result` class.
```python
from amplify import solve

fixstars_result = solve(model, fixstars_client)
dwave_result = solve(model, dwave_client)
```

````{tip}
By passing a list of models and/or a list of clients to {py:func}`~amplify.parallel_solve`, you can run the same model with different solvers, different models with the same solver, or different models with different solvers.

```{testcode}
model1 = objective + constraint
model2 = objective + 2 * constraint
```

To solve a single model with multiple solvers in parallel:
: ```python
  result1, result2 = parallel_solve(model1, [fixstars_client, dwave_client])
  ```

To solve multiple models in parallel with a single solver:
: ```python
  result1, result2 = parallel_solve([model1, model2], fixstars_client)
  ```

To solve multiple models in parallel with multiple solvers:
: ```python
  result1, result2 = parallel_solve([model1, model2], [fixstars_client, dwave_client])
  ```
````

## Parallel execution parameters

{py:func}`~amplify.parallel_solve` accepts the same keyword arguments as {py:func}`~amplify.solve`. Normally, each keyword argument is shared by all parallel runs, but providing a list specifies a different value for each run, just as with multiple models and solvers. In that case, every list must have the same number of elements.

As an example, the following call specifies `dry_run` as a list:

```python
fixstars_result, dwave_result = parallel_solve(
    model,
    [fixstars_client, dwave_client],
    dry_run=[False, True],
    num_solves=2,
)
```

This has the same effect as running the following in parallel:

```python
fixstars_result = solve(model, fixstars_client, dry_run=False, num_solves=2)
dwave_result = solve(model, dwave_client, dry_run=True, num_solves=2)
```

Depending on the solver type and contract, sending multiple requests to the same solver simultaneously may not be possible. The maximum number of parallel runs can be limited with the `concurrency` argument of {py:func}`~amplify.parallel_solve`.
The default is `concurrency=0`, in which case the number of parallel runs is determined automatically from the number of CPUs on the machine running it.

```python
fixstars_result, dwave_result = parallel_solve(
    model,
    [fixstars_client, dwave_client, fixstars_client, dwave_client],
    concurrency=2,
)
```
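As a rough conceptual sketch (not Amplify's actual implementation), the effect of `concurrency` can be modeled with a thread pool whose worker count plays the role of the `concurrency` value; `fake_solve` below is a hypothetical stand-in for a solver call:

```python
from concurrent.futures import ThreadPoolExecutor

def fake_solve(model, client):
    # Hypothetical stand-in for amplify.solve: returns a label, not a Result.
    return f"solved {model} with {client}"

# Four (model, client) pairs, analogous to the four-client call above.
jobs = [("model", "fixstars"), ("model", "dwave"),
        ("model", "fixstars"), ("model", "dwave")]

# max_workers=2 mirrors concurrency=2: at most two requests in flight at once,
# while results are still returned in the order the jobs were submitted.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(lambda job: fake_solve(*job), jobs))

print(results[0])  # "solved model with fixstars"
```

In this sketch, a smaller `max_workers` trades throughput for fewer simultaneous requests, which is the same trade-off `concurrency` addresses for solvers that limit concurrent access.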