# Parallel Solver Execution

{py:func}`~amplify.parallel_solve` can send queries to multiple clients and models simultaneously. When multiple runs are required, such parallel execution may hide the processing time for model transformations and request-data creation, as well as the data-transfer time incurred by network access to the solver. Also, for solvers that can run multiple problems simultaneously, parallel execution is expected to improve execution efficiency.

## Parallel execution example

First, construct a model as you would when using the {py:func}`~amplify.solve` function. Since we will also show examples using multiple models later, we build two models here.

```{testcode}
from amplify import VariableGenerator, one_hot, solve

gen = VariableGenerator()
q = gen.array("Binary", 3)
objective = q[0] * q[1] - q[2]
constraint = one_hot(q)
model1 = objective + constraint
model2 = objective + 2 * constraint
```

Next, we create multiple solver clients.

```{testcode}
from amplify import AmplifyAEClient, DWaveSamplerClient
from datetime import timedelta

amplify_client = AmplifyAEClient()
# amplify_client.token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
amplify_client.parameters.time_limit_ms = timedelta(milliseconds=1000)

dwave_client = DWaveSamplerClient()
# dwave_client.token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
dwave_client.parameters.num_reads = 100
```

The {py:func}`~amplify.parallel_solve` function is used to send requests to multiple clients. {py:func}`~amplify.parallel_solve` has the same interface as {py:func}`~amplify.solve`, but it also accepts lists of arguments.

```python
from amplify import parallel_solve

amplify_result, dwave_result = parallel_solve(model1, [amplify_client, dwave_client])
```

The code above has the same effect as running the following `for`{l=python}-loop in parallel.

```python
from amplify import solve

for client in [amplify_client, dwave_client]:
    result = solve(model1, client)
```

{py:func}`~amplify.parallel_solve` can also accept lists for both the model and client arguments.
For example, you can use {py:func}`~amplify.parallel_solve` to run multiple models and multiple clients concurrently as follows:

```python
result1, result2 = parallel_solve([model1, model2], [amplify_client, dwave_client])
```

This has the same effect as running the following `for`{l=python}-loop in parallel.

```python
for model, client in zip([model1, model2], [amplify_client, dwave_client], strict=True):
    result = solve(model, client)
```

```{note}
When both the model and client arguments are given as lists, the number of elements in each list must be the same.
```

If one of the arguments is a scalar value, it is treated as a list in which the same value is repeated for each element of the other argument's list. In the first example, multiple clients were used for a single model, but you can also use a single client for multiple models as follows:

```python
# The following is equivalent to parallel_solve([model1, model2], [amplify_client] * 2).
result1, result2 = parallel_solve([model1, model2], amplify_client)

# The following is equivalent to parallel_solve([model1] * 2, [amplify_client, dwave_client]).
result1, result2 = parallel_solve(model1, [amplify_client, dwave_client])
```

## Parallel execution parameters

{py:func}`~amplify.parallel_solve` accepts keyword arguments similar to {py:func}`~amplify.solve`. Usually, keyword arguments are shared by all parallel runs, but by providing a list you can specify multiple values at once, just as with multiple models and clients. In this case as well, the number of elements in each list-type keyword argument must match that of the model and client arguments.

As an example, consider the following parameter specification using lists:

```python
amplify_result, dwave_result = parallel_solve(
    model1,
    [amplify_client, dwave_client],
    dry_run=[False, True],
    num_solves=2,
)
```

The above has the same effect as running the following code in parallel.
```python
for client, dry_run in zip([amplify_client, dwave_client], [False, True], strict=True):
    result = solve(model1, client, dry_run=dry_run, num_solves=2)
```

Depending on the solver type and contract, it may not be possible to send multiple requests to the same client simultaneously. The maximum number of parallel runs can be set with the `concurrency` parameter of {py:func}`~amplify.parallel_solve`. The default is 0, in which case the number of parallel runs is determined automatically from the number of CPUs on the machine.

```python
amplify_result, dwave_result = parallel_solve(
    model1,
    [amplify_client, dwave_client],
    concurrency=2,
)
```
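
Conceptually, `concurrency` caps how many solve requests are in flight at any one time, much like a bounded thread pool. The following is a minimal sketch of that behavior using only the Python standard library; `fake_solve` and `parallel_solve_sketch` are hypothetical stand-ins for illustration only, not part of the SDK:

```python
from concurrent.futures import ThreadPoolExecutor

def fake_solve(model, client):
    # Hypothetical stand-in for amplify.solve, used only for illustration.
    return f"result({model}, {client})"

def parallel_solve_sketch(jobs, concurrency=2):
    # Run at most `concurrency` solve calls at a time; results are
    # collected in the same order as the (model, client) pairs.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(fake_solve, model, client) for model, client in jobs]
        return [future.result() for future in futures]

results = parallel_solve_sketch(
    [("model1", "amplify_client"), ("model1", "dwave_client")],
    concurrency=2,
)
```

With `concurrency=1` the same sketch degenerates into the sequential `for`{l=python}-loops shown earlier, which is why lowering `concurrency` is a safe fallback when a client rejects simultaneous requests.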