# Constraint¶

This section explains the classes that represent constraints.

Before reading this section, please review the basic use of the Amplify SDK at Tutorial and how to use polynomial objects at Polynomial.

## The Constraint Objects¶

When solving problems with quantum annealing machines and Ising machines, it may be necessary to impose constraint conditions on the variables.

For example, you may need to search for the optimal solution of the objective function $$f\left(x_0, x_1, x_2 \right)$$ under some constraints, such as an equality constraint like $$x_0 + x_1 + x_2 = 1$$ or an inequality constraint like $$x_0 + x_1 + x_2 \leq 2$$.

However, QUBO and Ising models, which you need to use for quantum annealing machines and Ising machines, cannot directly handle such constraints. One basic approach to deal with constraints is to prepare a penalty function $$g$$, which takes the minimum value if and only if the constraint is satisfied, and add it to the original objective function $$f$$ with a weight $$\lambda$$.

In other words, by finding the optimal solution of $$f' = f + \lambda g \quad (\lambda > 0)$$ instead of $$f$$, a solution will be obtained at which the penalty function $$g$$ takes its minimum value and therefore satisfies the constraint condition. In practice, since the output of the machine is not always a solution that satisfies the constraints, you must verify that a solution is feasible by checking whether $$g$$ takes its minimum value at that solution.
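
This procedure can be illustrated without any solver by brute-forcing a tiny binary problem in plain Python. The objective $$f$$, the penalty $$g$$, and the weight below are illustrative choices, not SDK code:

```python
from itertools import product

# Toy objective: f(q) = q0 - 2*q1 + q2  (an arbitrary example)
f = lambda q: q[0] - 2 * q[1] + q[2]

# Penalty for the one-hot constraint q0 + q1 + q2 == 1:
# g = (q0 + q1 + q2 - 1)^2 equals 0 iff the constraint holds.
g = lambda q: (q[0] + q[1] + q[2] - 1) ** 2

lam = 10  # weight chosen large enough to dominate the objective

# Minimize f' = f + lam * g over all 2^3 binary assignments
best = min(product((0, 1), repeat=3), key=lambda q: f(q) + lam * g(q))

print(best, g(best))  # -> (0, 1, 0) 0: the minimizer satisfies the constraint
```

Because the weight dominates the objective here, every infeasible assignment costs at least `lam` more than any feasible one, so the minimizer of $$f'$$ is feasible.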

The Amplify SDK abstracts this procedure with constraint objects, which handle everything from constructing penalty functions and weighting them to checking whether solutions satisfy the constraints. The components managed by a constraint object are as follows:

• Constraint condition: Constraint conditions to be satisfied

• Penalty function: Penalty functions corresponding to constraint conditions

• Weight: Weights to be given to the penalty function ($$\lambda$$ in the above)

Constraint objects can be converted to logical models by giving them to BinaryQuadraticModel etc., or they can be treated as additional constraints on a polynomial or matrix object by combining them with polynomial or matrix classes. The weighted penalty function is added to the objective function when the logical model is converted to the physical model, and becomes part of the input to the Ising machines.

The correspondence with polynomial and matrix classes is as follows:

## Typical Constraints¶

The Amplify SDK provides helper functions to create constraint objects for typical constraints.

• equal_to() constrains a given polynomial $$f$$ to have a constant value $$c$$.

• less_equal() constrains a given polynomial $$f$$ to be less than or equal to a constant value $$c$$. However, $$f$$ must be a polynomial that takes only integer values, and the constraint range and value range must overlap.

• greater_equal() constrains a given polynomial $$f$$ to be greater than or equal to a constant value $$c$$. However, $$f$$ must be a polynomial that takes only integer values, and the constraint range and value range must overlap.

• clamp() constrains a given polynomial $$f$$ to have a constant value between $$c_1$$ and $$c_2$$. However, $$f$$ must be a polynomial that takes only integer values, and the constraint range and value range must overlap.

| Constraint | Generating function | Condition for use |
| --- | --- | --- |
| $$f \left( \mathbf{q} \right) = c$$ | equal_to() | $$\exists \mathbf{q}, f\left( \mathbf{q} \right) = c$$ |
| $$f \left( \mathbf{q} \right) \le c$$ | less_equal() | $$\forall \mathbf{q}, f\left( \mathbf{q} \right) \in \mathbb{Z} \land \exists \mathbf{q}, f\left( \mathbf{q} \right) \le c$$ |
| $$f \left( \mathbf{q} \right) \ge c$$ | greater_equal() | $$\forall \mathbf{q}, f\left( \mathbf{q} \right) \in \mathbb{Z} \land \exists \mathbf{q}, f\left( \mathbf{q} \right) \ge c$$ |
| $$c_1 \le f \left( \mathbf{q} \right) \le c_2$$ | clamp() | $$\forall \mathbf{q}, f\left( \mathbf{q} \right) \in \mathbb{Z} \land \exists \mathbf{q}, f\left( \mathbf{q} \right) \le c_2 \land \exists \mathbf{q}, f\left( \mathbf{q} \right) \ge c_1$$ |

These helper functions return a constraint object with the given constraint condition itself and a penalty function corresponding to the constraint, initialized with the weight $$1$$. Examples of using each helper function are shown below.

Note

The penalty function given by the helper function is normalized so that the minimum penalty imposed for breaking the constraint is $$1$$. Note that the appropriate weights must be given to the constraint object depending on the possible values of the objective function.

### Equality Constraint¶

For equality constraints, the equal_to() function constrains any given polynomial $$f$$ to have a constant value $$c$$. The function takes a constraint target as its first argument and a constant value as its second argument. The following example creates a constraint object that represents $$3 q_0 - q_1 - 2 q_2 + q_3 = 2$$.

from amplify import BinarySymbolGenerator
from amplify.constraint import equal_to

gen = BinarySymbolGenerator()  # Declare a BinaryPoly variable generator
q = gen.array(4)  # Generate variable array of length 4
f = 3 * q[0] - q[1] - 2 * q[2] + q[3]  # The left-hand side of the constraint f
g = equal_to(f, 2)  # Create a constraint object representing f = 2

>>> g
3 q_0 - q_1 - 2 q_2 + q_3 == 2


To find a solution that satisfies the constraint, construct a logical model from the created constraint object and solve it with a solver object constructed as follows:

from amplify import BinaryQuadraticModel, Solver
from amplify.client import FixstarsClient

client = FixstarsClient()
client.token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
client.parameters.timeout = 1000
client.parameters.outputs.duplicate = True  # Duplicate output of solutions with identical energy values
client.parameters.outputs.num_outputs = 0

model = BinaryQuadraticModel(g)  # Construct a logical model from the constraint

solver = Solver(client)
result = solver.solve(model)

>>> for solution in result:
>>>     values = solution.values
>>>     print(f"q = {q.decode(values)}, f = {f.replace_all(values)}")
q = [1. 1. 0. 0.], f = 2.0
q = [1. 0. 1. 1.], f = 2.0


We can see from above that we got a solution that satisfies $$f = 2$$.

It is also possible to give a polynomial array as the constraint target. In such cases, the sum() method is implicitly called on the polynomial array, and the sum of the array elements will be the constraint target.

from amplify import BinarySymbolGenerator
from amplify.constraint import equal_to

gen = BinarySymbolGenerator()  # Declare a BinaryPoly variable generator
q = gen.array(4)  # Generate variable array of length 4

>>> equal_to(q, 2)
q_0 + q_1 + q_2 + q_3 == 2
>>> equal_to(q.sum(), 2)
q_0 + q_1 + q_2 + q_3 == 2


Note

For a given polynomial $$f$$ and constant $$c$$, the equal_to() function estimates $$\min f$$ and generates a penalty function $$g$$ as follows:

1. If $$\min f < c$$ :

$g = \left( f - c \right)^2$

2. If $$\min f = c$$ :

$g = f - c$

It is easy to see that the penalty function $$g$$ satisfies $$g = 0$$ when $$f = c$$ and $$g > 0$$ otherwise.
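
This property can be checked by exhaustive enumeration in plain Python, using the constraint from the earlier example, $$3 q_0 - q_1 - 2 q_2 + q_3 = 2$$ (here $$\min f = -3 < c = 2$$, so the quadratic form applies):

```python
from itertools import product

f = lambda q: 3 * q[0] - q[1] - 2 * q[2] + q[3]
g = lambda q: (f(q) - 2) ** 2  # penalty for the constraint f == 2

for q in product((0, 1), repeat=4):
    if f(q) == 2:
        assert g(q) == 0  # feasible assignments incur no penalty
    else:
        assert g(q) >= 1  # since f is integer-valued, the minimum
                          # penalty for breaking the constraint is 1
```

The final comment also matches the normalization note above: the smallest nonzero penalty is exactly $$1$$.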

### Inequality Constraint¶

Objects generated by the following functions constrain polynomials that take integer values to satisfy inequalities.

• less_equal() constrains a given polynomial $$f$$ to be less than or equal to the value $$c$$ ($$f \le c$$).

• greater_equal() constrains a given polynomial $$f$$ to be greater than or equal to the value $$c$$ ($$f \ge c$$).

• clamp() constrains the given polynomial $$f$$ to have a value between $$c_1$$ and $$c_2$$ ($$c_1 \le f \le c_2$$).

Note

The functions above, which correspond to inequality constraints, work as constraints on polynomials that take only integer values. Note that inequality constraints on polynomials taking real values are not covered in general. For such inequality constraints, a relaxed formulation of the constraint condition can be used. An example implementation of the relaxed formulation is given in Construct advanced constraints.

The less_equal() function takes a constraint target as its first argument and a constant value as its second argument. The greater_equal() function does the same. The clamp() function accepts a minimum and maximum value for the second and third arguments, respectively.

In the following example we create a constraint representing $$4 q_0 + 3 q_1 + 2 q_2 + q_3 \le 3$$.

from amplify import BinarySymbolGenerator
from amplify.constraint import less_equal

gen = BinarySymbolGenerator()  # Declare a BinaryPoly variable generator
q = gen.array(4)  # Generate variable array of length 4
f = 4 * q[0] + 3 * q[1] + 2 * q[2] + q[3]
g = less_equal(f, 3) # Create a constraint object representing f <= 3

>>> g
4 q_0 + 3 q_1 + 2 q_2 + q_3 <= 3


To find a solution that satisfies the constraint, construct a logical model from the created constraint object and solve it with a solver object constructed as follows.

from amplify import BinaryQuadraticModel, Solver
from amplify.client import FixstarsClient

client = FixstarsClient()
client.token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
client.parameters.timeout = 1000
client.parameters.outputs.duplicate = True  # Duplicate output of solutions with identical energy values
client.parameters.outputs.num_outputs = 0

model = BinaryQuadraticModel(g)  # Construct a logical model from the constraint

solver = Solver(client)
result = solver.solve(model)

>>> for solution in result:
>>>     values = solution.values
>>>     print(f"q = {q.decode(values)}, f = {f.replace_all(values)}")
q = [0. 0. 0. 1.], f = 1.0
q = [0. 0. 1. 0.], f = 2.0
q = [0. 0. 0. 0.], f = 0.0
q = [0. 0. 1. 1.], f = 3.0
q = [0. 1. 0. 0.], f = 3.0


We can see we got a solution satisfying $$f \le 3$$.

The following is an example of using the greater_equal() and clamp() functions.

from amplify.constraint import greater_equal

g = greater_equal(f, 3) # Create a constraint object representing f >= 3
result = solver.solve(g)

>>> g
4 q_0 + 3 q_1 + 2 q_2 + q_3 >= 3

>>> for solution in result:
>>>     values = solution.values
>>>     print(f"q = {q.decode(values)}, f = {f.replace_all(values)}")
q = [1. 1. 0. 1.], f = 8.0
q = [1. 0. 0. 1.], f = 5.0
q = [0. 1. 1. 1.], f = 6.0
q = [1. 1. 0. 0.], f = 7.0
q = [1. 0. 1. 0.], f = 6.0
q = [1. 1. 1. 0.], f = 9.0
q = [1. 0. 1. 1.], f = 7.0
q = [1. 1. 1. 1.], f = 10.0
q = [1. 0. 0. 0.], f = 4.0
q = [0. 1. 1. 0.], f = 5.0
q = [0. 1. 0. 1.], f = 4.0
q = [0. 1. 0. 0.], f = 3.0
q = [0. 0. 1. 1.], f = 3.0

from amplify.constraint import clamp

g = clamp(f, 2, 3)  # Create constraint object representing 2 <= f <= 3
result = solver.solve(g)

>>> g
2 <= 4 q_0 + 3 q_1 + 2 q_2 + q_3 <= 3

>>> for solution in result:
>>>     values = solution.values
>>>     print(f"q = {q.decode(values)}, f = {f.replace_all(values)}")
q = [0. 0. 1. 0.], f = 2.0
q = [0. 0. 1. 1.], f = 3.0
q = [0. 1. 0. 0.], f = 3.0


If a polynomial array is given as the constraint target, the sum of the array elements will be the polynomial to be constrained, as in the equal_to() function.

from amplify import BinarySymbolGenerator
from amplify.constraint import less_equal

gen = BinarySymbolGenerator()  # Declare a BinaryPoly variable generator
q = gen.array(4)  # Generate variable array of length 4

>>> less_equal(q, 2)
q_0 + q_1 + q_2 + q_3 <= 2
>>> less_equal(q.sum(), 2)
q_0 + q_1 + q_2 + q_3 <= 2


## Formulation of the Traveling Salesman Problem¶

A typical constraint with an equality condition is the One-hot constraint. It constrains multiple variables so that only one of them is $$1$$ and the others are $$0$$. It appears when multiple variables are used to represent multiple states.

Here we explain the formulation of the constraints by taking Traveling Salesman Problem , a problem in which one-hot constraints appear, as an example.

The objective function of the Traveling Salesman Problem is as follows:

$f = \sum_{n=0}^{N-1}{\sum_{i=0}^{N-1}{\sum_{j=0}^{N-1}{ d_{ij} q_{n, i} q_{n+1, j} }}} \quad \left( q_{N,j} = q_{0,j} \right)$

Here, $$N$$ denotes the problem size (number of cities) and $$q$$ the binary variables of the objective function. $$d$$ is the distance matrix representing the distances between cities. As described in Formulation by polynomial array, a concise way to express the objective function is as follows:

import numpy as np
from amplify import BinarySymbolGenerator, sum_poly

# Problem Size (Number of Cities)
ncity = 10

# Create distance matrix from random numbers
d = np.triu(np.random.rand(ncity, ncity), k=1)
d += np.triu(d).T  # Convert to symmetric matrix

# Create variable arrays
gen = BinarySymbolGenerator()
q = gen.array(ncity, ncity)  # Generate ncity x ncity variable array

# Formulation of the objective function
qr = q.roll(-1, axis=0)
f = sum_poly("ij,ni,nj->", d, q, qr)  # Einstein summation convention
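
The contraction "ij,ni,nj->" sums $$d_{ij}$$ over consecutive tour positions: step $$n$$ selects city $$i$$ and step $$n+1$$ selects city $$j$$. This can be cross-checked with plain NumPy on a fixed candidate tour (the identity permutation, an illustrative choice, independent of the SDK):

```python
import numpy as np

np.random.seed(0)
ncity = 5
d = np.triu(np.random.rand(ncity, ncity), k=1)
d += np.triu(d).T  # symmetric distance matrix

# A candidate solution: q[n, i] = 1 iff city i is visited at step n.
q = np.eye(ncity)            # visit cities in order 0 -> 1 -> ... -> 4 -> 0
qr = np.roll(q, -1, axis=0)  # q shifted by one step (the "next" city)

tour_len = np.einsum("ij,ni,nj->", d, q, qr)

# Direct check: sum of the edges along the cycle 0 -> 1 -> ... -> 0
direct = sum(d[n, (n + 1) % ncity] for n in range(ncity))
assert np.isclose(tour_len, direct)
```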


The formulation of the traveling salesman problem requires more than just an objective function. The following constraints must be imposed on the variable array to represent the condition that each city is visited exactly once.

\begin{align}\begin{aligned}\sum_{i=0}^{N-1}{q_{n, i}} = 1 \quad n \in \left\{0, 1, \cdots, N - 1 \right\}\\\sum_{n=0}^{N-1}{q_{n, i}} = 1 \quad i \in \left\{0, 1, \cdots, N - 1 \right\}\end{aligned}\end{align}

The above constraint conditions state that in each row and column of the two-dimensional variable array, there must be exactly one variable that is $$1$$. These are expressed using the equal_to() function as follows.

from amplify.constraint import equal_to

g = sum(equal_to(q[i, :].sum(), 1) + equal_to(q[:, i].sum(), 1) for i in range(ncity))


The sum of the variables in each row and column is calculated by sum(), and then the constraint for each row and column is constructed by the equal_to() function. These constraint objects are added together by the sum() function to give the constraint object $$g$$, which reflects all the constraints. Note that the sum() method inside the equal_to() call can be omitted, since the constraint functions accept a polynomial array and call sum() implicitly.

Moreover, as a special case of the equal_to() function, there is a one_hot() function for one-hot constraints. This function takes a polynomial or a polynomial array for its only argument. Giving a polynomial (or polynomial array) to this function is equivalent to feeding the equal_to() function with the same polynomial (or polynomial array) as the first argument and $$1$$ as the second argument.

Therefore, the following simplification is achieved:

from amplify.constraint import one_hot

g = sum(one_hot(q[i]) + one_hot(q[:, i]) for i in range(ncity))


Next, we determine the weights of the constraint objects. The constraint objects generated by default are normalized to impose a penalty value so that the minimum penalty for not satisfying the constraint will be $$1$$. This may be under- or over-weighted depending on the objective function. If the weights are too small, the optimized result will be less likely to satisfy the constraints, and if they are too large, the objective function may result in a poorly minimized solution.

While it is generally difficult to determine the appropriate constraint weights in advance, the following guideline can be helpful:

(The maximum gain of the objective function when a constraint is violated) $$<$$ (The minimum penalty value when that constraint is violated)

It is difficult to accurately estimate the left-hand side of the above guideline in advance because the (quasi-)optimal solution output by the machine cannot be known in advance. Therefore, we will roughly estimate the maximum value of the left-hand side to determine the weight of the penalty.

In the traveling salesman problem, the gain from violating constraints is estimated as follows: first, suppose we have a feasible solution that satisfies all constraints. We consider the maximum gain in the objective function obtainable by modifying that solution. For example, turning one variable from $$1$$ to $$0$$ can be interpreted as removing two transfers $$i \rightarrow j$$ and $$j \rightarrow k$$ for some visiting order $$i \rightarrow j \rightarrow k$$. Such a change is penalized because two constraint conditions are violated. On the other hand, the gain of the objective function is at most twice the maximum edge length $$\max_{(i,j)} d_{ij}$$. By considering other cases similarly, we can see the penalty weight below is sufficient:

g *= np.max(d)


The weight of a constraint object can be changed by multiplying it by a number. See Set Constraint Weights for more details. Together with the formulation of the objective function, the formulation of the traveling salesman problem can be summarized as follows:

import numpy as np
from amplify import BinarySymbolGenerator, sum_poly
from amplify.constraint import one_hot

# Problem Size (Number of Cities)
ncity = 10

# Create distance matrix
d = np.triu(np.random.rand(ncity, ncity), k=1)
d += np.triu(d).T  # Convert to symmetric matrix

# Create variable arrays
gen = BinarySymbolGenerator()
q = gen.array(ncity, ncity)  # Generate ncity x ncity variable array

# Formulation of the objective function
qr = q.roll(-1, axis=0)  # Displace q by -1 in the column direction (axis=0)
f = sum_poly("ij,ni,nj->", d, q, qr)  # Using einsum instead of sum_poly works the same way

# Formulation of the constraints
g = np.max(d) * sum(one_hot(q[i]) + one_hot(q[:, i]) for i in range(ncity))

# Logical model construction
m = f + g


Setting weights for constraint objects is more important for larger problems. Trial-and-error adjustments to the weights may be necessary to obtain a result that satisfies the constraints or to obtain a better solution. See below for more information on improving solution accuracy by estimating and adjusting constraint weights discussed above.

K. Takehara, D. Oku, Y. Matsuda, S. Tanaka and N. Togawa, “A Multiple Coefficients Trial Method to Solve Combinatorial Optimization Problems for Simulated-annealing-based Ising Machines,” 2019 IEEE 9th International Conference on Consumer Electronics (ICCE-Berlin), Berlin, Germany, 2019, pp. 64-69, doi: 10.1109/ICCE-Berlin47944.2019.8966167.

## Inequality Constraints and Integer Encoding¶

The inequality constraint functions, namely, greater_equal(), less_equal(), and clamp(), generate penalty functions $$g$$ similarly to that of equality constraint functions.

First, an integer range $$\left[c_1, c_2 \right]$$ is determined from the value range of the constrained polynomial $$f$$ and the given constraint range, so that $$c_1$$ and $$c_2$$ are the lower and upper bounds $$f$$ may take under the constraint, respectively. Then a penalty function $$g$$ is constructed in one of two ways, depending on the value of $$c_2 - c_1$$.

1. If $$c_2 - c_1 > 1$$

A penalty function $$g$$ is created using a polynomial $$h$$ whose value range exactly matches the integer range $$\left[c_1, c_2 \right]$$.

$g = \left( f - h \right)^2$

The penalty function $$g$$ takes $$g = 0$$ if $$f = h$$, i.e. $$c_1 \le f \le c_2$$, and $$g > 0$$ otherwise.

2. If $$c_2 - c_1 = 1$$

If $$c_1$$ and $$c_2$$ are adjacent integers, create a penalty function $$g$$ as follows:

$g = \left( f - c_1 \right)\left( f - c_2 \right)$

The penalty function $$g$$ takes $$g = 0$$ if $$f = c_1$$ or $$f = c_2$$ and $$g > 0$$ otherwise.
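
The sign pattern of this penalty can be confirmed by enumeration; the polynomial $$f$$ below is a hypothetical example, not taken from the SDK:

```python
from itertools import product

# Constrain f = q0 + q1 + q2 to 1 <= f <= 2 (c1 = 1, c2 = 2, so c2 - c1 = 1)
f = lambda q: q[0] + q[1] + q[2]
g = lambda q: (f(q) - 1) * (f(q) - 2)  # penalty (f - c1)(f - c2)

for q in product((0, 1), repeat=3):
    if 1 <= f(q) <= 2:
        assert g(q) == 0  # no penalty inside the allowed range
    else:
        assert g(q) > 0   # f = 0 or f = 3 is penalized
```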

The former case $$c_2 - c_1 > 1$$ requires an integer encoding to construct a polynomial $$h$$ over binary (or Ising) variables that can take all integer values in $$\left[c_1, c_2 \right]$$. The following methods can be specified as integer encodings for creating polynomial $$h$$.

| Method | Polynomial | Number of auxiliary variables | Maximum absolute value of coefficient | Enumerated type name | Note |
| --- | --- | --- | --- | --- | --- |
| Unary | $$c_1 + \sum_{i=0}^{c_2 - c_1 - 1}{q_i}$$ | $$c_2 - c_1$$ | $$1$$ | Unary | |
| Binary | $$c_1 + \sum_{i=0}^{\lfloor \log_2 \left( c_2 - c_1 + 1 \right) \rfloor - 1}{2^i q_i} + \cdots$$ | $$O \left( \log_2 \left( c_2 - c_1 + 1 \right) \right)$$ | $$O \left( c_2 - c_1 \right)$$ | Binary | not implemented in Ising |
| Linear | $$c_1 + \sum_{i=1}^{d := \lfloor \sqrt{c_2 - c_1} \rfloor}{ i q_i } + \sum_{i=1}^{d - 1}{ i q_{i+d} } + \cdots$$ | $$O \left( \sqrt{c_2 - c_1} \right)$$ | $$O \left( \sqrt{c_2 - c_1} \right)$$ | Linear | not implemented in Ising |

Methods other than the Unary method may leave a fraction $$r$$ in the above formulation. In such cases, the Binary method adds a term $$r q_x$$ for the fraction $$r$$, and the Linear method adds two terms with the half value of the fraction $$r$$ as their coefficients (if $$r$$ is odd, terms with $$\left( r + 1 \right)/2$$ and $$\left( r - 1 \right)/2$$ will be added).
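
The following plain-Python sketch illustrates how the Unary and Binary encodings cover an integer range. The helper functions are hypothetical (not SDK API), and the Binary remainder handling follows the note above (a term $$r q_x$$ for the fraction $$r$$):

```python
from itertools import product

def unary_values(c1, c2):
    """All values of h = c1 + sum(q_i) over c2 - c1 unary auxiliary bits."""
    n = c2 - c1
    return {c1 + sum(q) for q in product((0, 1), repeat=n)}

def binary_values(c1, c2):
    """All values of h = c1 + sum(2^i q_i) + r*q_x (power-of-two bits plus a remainder bit)."""
    span = c2 - c1
    k = (span + 1).bit_length() - 1  # number of power-of-two bits
    coeffs = [2 ** i for i in range(k)]
    r = span - (2 ** k - 1)          # remainder so the maximum reaches c2
    if r > 0:
        coeffs.append(r)
    return {c1 + sum(c * q for c, q in zip(coeffs, qs))
            for qs in product((0, 1), repeat=len(coeffs))}

# Both encodings cover the whole integer range, with different variable counts:
target = set(range(3, 14))            # [c1, c2] = [3, 13], an arbitrary example
assert unary_values(3, 13) == target  # 10 auxiliary variables
assert binary_values(3, 13) == target  # 4 auxiliary variables
```

The trade-off in the table is visible here: Unary spends one variable per unit of range with unit coefficients, while Binary spends roughly $$\log_2$$ as many variables at the cost of larger coefficients.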

You can specify a method by the keyword argument method in inequality constraint functions, namely, the greater_equal(), less_equal(), and clamp() functions. Integer encoding requires auxiliary variables depending on the integer range $$c_2 - c_1$$. The Amplify SDK automatically manages the auxiliary variables, but you may need to keep in mind how many auxiliary variables you will need and how large the absolute value of the coefficients in the penalty function will be. A large number of auxiliary variables or large absolute values of the coefficients (compared to those of the objective function) may affect the accuracy of the solution.

The following is an example of constructing an inequality constraint object using the Unary method.

from amplify import BinarySymbolGenerator, IntegerEncodingMethod
from amplify.constraint import less_equal

gen = BinarySymbolGenerator() # Declare a BinaryPoly variable generator
q = gen.array(5)  # Generate variable array of length 5
f = q[0] + 2 * q[1] + 3 * q[2] + 4 * q[3] + 5 * q[4]

>>> less_equal(f, 10, method=IntegerEncodingMethod.Unary)
q_0 + 2 q_1 + 3 q_2 + 4 q_3 + 5 q_4 <= 10


If an encoding method is not specified or set to Default, the encoding method is chosen as follows:

1. Choose the method that requires the fewest auxiliary variables among the Unary, Linear, and Binary methods.

2. If the numbers of required auxiliary variables are equal, choose the method in the priority order Unary, Linear, Binary.

### Formulation of the Knapsack Problem¶

We treat the formulation of the knapsack problem as an example of using inequality constraints. To solve a knapsack problem, we need to find the set of items with the greatest total value such that the total weight does not exceed $$W$$, given several items with weight and value.

Suppose that the weights and values of four items and the weight limit are given as follows:

w = [55, 44, 26, 30]  # Weight
v = [50, 36, 24, 33]  # Value
maxW = 100  # Maximum total weight


Create an objective function to maximize the sum of values by generating an array of binary variables representing whether or not to select an item.

from amplify import BinarySymbolGenerator

gen = BinarySymbolGenerator()  # Declare a BinaryPoly variable generator
q = gen.array(4)  # Generate variable array of length 4

# Objective function (reversing the sign to make it a minimization problem)
f = -(v * q).sum()


We need to set a constraint so that the total weight does not exceed maxW.

from amplify.constraint import less_equal

# Constraints if weight is maxW or less
g = less_equal((w * q).sum(), maxW)

from amplify import Solver
from amplify.client import FixstarsClient

# Model combining the objective function and the constraint
m = f + g

client = FixstarsClient()
client.token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
client.parameters.timeout = 1000

solver = Solver(client)
result = solver.solve(m)


The results can be shown in the following way:

>>> print(f"Total value: {-result[0].energy}")
>>> print(f"Total weight: {(w * q.decode(result[0].values)).sum()}")
Total value: 93.0
Total weight: 100.0
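
The reported optimum can be cross-checked by brute force over the $$2^4$$ item subsets, independently of the SDK:

```python
from itertools import product

w = [55, 44, 26, 30]  # weights
v = [50, 36, 24, 33]  # values
maxW = 100            # maximum total weight

# Enumerate all item subsets within the weight limit; pick the most valuable
best_value, best_q = max(
    (sum(vi * qi for vi, qi in zip(v, q)), q)
    for q in product((0, 1), repeat=4)
    if sum(wi * qi for wi, qi in zip(w, q)) <= maxW
)

print(best_value, best_q)  # -> 93 (0, 1, 1, 1): items 1, 2, 3 with weight 44 + 26 + 30 = 100
```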


## Constraints Using Penalty Functions¶

If the constraint you wish to construct is outside the scope of the helper functions, or if you wish to set your own penalty function, you can create your own constraint object using the penalty() function.

As an example of creating a constraint object directly from a penalty function, consider the constraint that two variables $$q_0, q_1$$ cannot both be $$1$$ at the same time. The penalty function that achieves this is given below:

$g = q_0 q_1$

The penalty function $$g$$ is penalized to $$g = 1$$ only if $$q_0 = q_1 = 1$$. Otherwise, $$g = 0$$.

Create a constraint object using the penalty() function as follows:

from amplify import BinarySymbolGenerator
from amplify.constraint import penalty

gen = BinarySymbolGenerator()  # Declare a BinaryPoly variable generator
q = gen.array(2)  # Generate variable array of length 2
g = penalty(q[0] * q[1])  # Set penalty function q[0] * q[1]

>>> g
q_0 q_1 == 0


As before, you can check the results by constructing a logical model from the constraint objects you have created and solving it with a solver object.

from amplify import BinaryQuadraticModel, Solver
from amplify.client import FixstarsClient

client = FixstarsClient()
client.token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
client.parameters.timeout = 1000
client.parameters.outputs.duplicate = True  # Duplicate output of solutions with identical energy values
client.parameters.outputs.num_outputs = 0

model = BinaryQuadraticModel(g)  # Construct a logical model from the constraint

solver = Solver(client)
result = solver.solve(model)

>>> for solution in result:
>>>     values = solution.values
>>>     energy = solution.energy
>>>     print(f"q = {q.decode(values)}, energy = {energy}")
q = [0. 0.], energy = 0.0
q = [0. 1.], energy = 0.0
q = [1. 0.], energy = 0.0


We can see variable pairs other than $$q_0 = q_1 = 1$$ have been obtained. Since we constructed the model from the constraint only, the energy value of the feasible solution is $$0$$.

When giving a penalty function $$g$$ to the penalty() function to construct a constraint, $$g$$ should satisfy $$g \ge 0$$ for all variable combinations, and the constraint is considered satisfied if $$g = 0$$. It is possible to change this to an equality or inequality with any value.

Suppose that you have a function $$g$$ whose minimum value is $$m$$. To give a penalty when $$g$$ takes a value $$g > m$$, give $$m$$ as a keyword argument eq.

from amplify import BinarySymbolGenerator
from amplify.constraint import penalty

gen = BinarySymbolGenerator()
q = gen.array(3) # Generate variable array of length 3
g = q[0] * q[1] + q[1] * q[2] + q[2] * q[0] + 1 # Penalty function g (min 1)
p = penalty(g, eq = 1) # Make sufficiency determination at g = 1

>>> p
q_0 q_1 + q_0 q_2 + q_1 q_2 + 1 == 1


Suppose that you have a function $$g$$. You can make the constraint be considered satisfied when $$g$$ is less than or equal to some value $$m$$, by giving $$m$$ to the keyword argument le, regardless of whether the minimum value of $$g$$ equals $$m$$.

Such a constraint object can be constructed as follows:

from amplify import BinarySymbolGenerator
from amplify.constraint import penalty

gen = BinarySymbolGenerator()
q = gen.array(3) # Generate variable array of length 3
f = -q[0] - 0.5 * q[1] - 0.5 * q[2]         # Objective function f
g = q[0] * q[1] + q[1] * q[2] + q[2] * q[0] # Penalty function g (min 0)
p = penalty(g, le = 1)    # Fulfillment determination with g <= 1

>>> p
q_0 q_1 + q_0 q_2 + q_1 q_2 <= 1

>>> result = solver.solve(f + p)
>>> for sol in result:
>>>     print(f"q = {q.decode(sol.values)}, f = {sol.energy}, g = {g.replace_all(sol.values)}")
q = [1, 1, 0], f = -1.5, g = 1.0
q = [1, 0, 1], f = -1.5, g = 1.0
q = [1, 0, 0], f = -1.0, g = 0.0


We were able to obtain several solutions with $$g \le 1$$.

Similarly, keyword arguments ge (greater equal), lt (less than), or gt (greater than) can be specified to express inequality constraint conditions.

Such a constraint object can be used to express a "weakened constraint", or to formulate an inequality constraint by the Lagrangian relaxation method. In the above example, the penalty function satisfies $$g \ge 0$$, but the constraint is considered satisfied if $$g \le 1$$ (instead of $$g = 0$$). So the above example solves a problem that minimizes $$f$$ subject to $$g \le 1$$ with the Lagrangian relaxation method, replacing the constraint $$g \le 1$$ with a linear penalty function $$\lambda \left( g - 1 \right)$$.

If it is difficult to express an inequality constraint directly, a feasible solution may be obtained by giving a penalty function corresponding to the relaxation problem and adjusting the weight $$\lambda$$ appropriately. In the above example, the optimal solution was obtained without changing the weight, but for more complex problems, a possible policy is to search for the optimal solution by changing the weights and solving the problem many times.
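
The constrained optimum that this relaxation targets can be verified by enumeration in plain Python: minimizing $$f$$ over the assignments with $$g \le 1$$ indeed gives $$f = -1.5$$, matching the solver output above:

```python
from itertools import product

f = lambda q: -q[0] - 0.5 * q[1] - 0.5 * q[2]
g = lambda q: q[0] * q[1] + q[1] * q[2] + q[2] * q[0]

# Enumerate all assignments that satisfy the relaxed constraint g <= 1
feasible = [q for q in product((0, 1), repeat=3) if g(q) <= 1]
best = min(feasible, key=f)

print(best, f(best))  # f = -1.5 at one of the optimal assignments
assert f(best) == -1.5
```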

Please refer to Set Constraint Weights for information on how to specify weights for the penalty function.

The relaxation method is a powerful tool for inequality constraints on polynomials that take real values, or that take integer values but would consume a large number of auxiliary variables. Although there is no guarantee that an optimal solution to the original problem will be obtained, it can be an option for expressing inequality constraints.

## Operators and Methods of Constraint Classes¶

### Label Constraints¶

Constraint objects can be labeled. This allows filtering of constraints based on labels. See Filter Constraints for a filtering usage example.

You can give a label to a constraint object by feeding the label to the final argument or the keyword argument label of the function that constructs the constraint object.

from amplify import BinarySymbolGenerator
from amplify.constraint import penalty, equal_to

gen = BinarySymbolGenerator()
q = gen.array(2)  # Generate variable array of length 2
g1 = penalty(q, label="penalty_label")
g2 = equal_to(q, 2, label="equal_to_label")

>>> g1
penalty_label: q_0 + q_1 == 0
>>> g2
equal_to_label: q_0 + q_1 == 2


When constraint objects are printed, their labels are displayed alongside their constraint expressions. The label alone can also be retrieved:

>>> g1.label
'penalty_label'


### Check if Constraint Conditions are Satisfied¶

To check whether the constraint is satisfied by a certain set of variable values, pass the values to the constraint object's is_satisfied() method. The variable values can be given as a dictionary, list, or function object, but they must be resolvable for all variable indices in the constraint.

from amplify import BinarySymbolGenerator
from amplify.constraint import one_hot

gen = BinarySymbolGenerator()
q = gen.array(4)  # Generate variable array of length 4
f = one_hot(q)

>>> f.is_satisfied({0: 1, 1: 0, 2: 0, 3: 0})
True
>>> f.is_satisfied([0, 1, 0, 0])
True
>>> f.is_satisfied(lambda i: 1 if i == 0 else 0)
True

>>> f.is_satisfied({0: 1, 1: 0, 2: 0, 3: 1})
False
>>> f.is_satisfied([1, 1, 1, 1])
False
>>> f.is_satisfied(lambda i: 0)
False


### Obtain Penalty Functions¶

You can obtain the penalty function, which is an internal expression of a constraint object, as follows:

from amplify import BinarySymbolGenerator, BinaryQuadraticModel
from amplify.constraint import one_hot, less_equal

gen = BinarySymbolGenerator()
q = gen.array(4)  # Generate variable array of length 4
f1 = one_hot(q)
f2 = less_equal(q, 2)

>>> f1
q_0 + q_1 + q_2 + q_3 == 1
>>> f1.penalty
2 q_0 q_1 + 2 q_0 q_2 + 2 q_0 q_3 + 2 q_1 q_2 + 2 q_1 q_3 + 2 q_2 q_3 - q_0 - q_1 - q_2 - q_3 + 1


This way of obtaining penalty functions does not work if auxiliary variables are required to construct the quadratic penalty functions. Such cases occur when a penalty function of higher degree or an inequality constraint is given. This is because the indices of the auxiliary variables are not yet determined.

>>> f2
q_0 + q_1 + q_2 + q_3 <= 2
>>> f2.penalty
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Function to publish ancillary variables is not given


In such cases, convert the constraint to a logical model such as BinaryQuadraticModel first, and the penalty function of the corresponding constraint can be checked. Note that there may be a variable conversion between the indices of the input variables and the logical variables since a conversion from input model to logical model is made.

>>> m = BinaryQuadraticModel(f2)
>>> m.input_constraints
[(q_0 + q_1 + q_2 + q_3 <= 2, 1)]
>>> m.input_constraints[0][0].penalty
2 q_0 q_1 + 2 q_0 q_2 + 2 q_0 q_3 - 2 q_0 q_4 - 2 q_0 q_5 + 2 q_1 q_2 + 2 q_1 q_3 - 2 q_1 q_4 - 2 q_1 q_5 + 2 q_2 q_3 - 2 q_2 q_4 - 2 q_2 q_5 - 2 q_3 q_4 - 2 q_3 q_5 + 2 q_4 q_5 + q_0 + q_1 + q_2 + q_3 + q_4 + q_5
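
The printed penalty can be verified to equal $$\left( f - h \right)^2$$ with the unary encoding $$h = q_4 + q_5$$ of the allowed range $$\left[0, 2\right]$$, by exhaustive enumeration (a plain-Python check, assuming the variable ordering shown above):

```python
from itertools import product

def penalty_printed(q):
    # The polynomial printed by m.input_constraints[0][0].penalty
    return (2*q[0]*q[1] + 2*q[0]*q[2] + 2*q[0]*q[3] - 2*q[0]*q[4] - 2*q[0]*q[5]
            + 2*q[1]*q[2] + 2*q[1]*q[3] - 2*q[1]*q[4] - 2*q[1]*q[5]
            + 2*q[2]*q[3] - 2*q[2]*q[4] - 2*q[2]*q[5]
            - 2*q[3]*q[4] - 2*q[3]*q[5] + 2*q[4]*q[5]
            + q[0] + q[1] + q[2] + q[3] + q[4] + q[5])

def penalty_formula(q):
    f = q[0] + q[1] + q[2] + q[3]  # constrained polynomial
    h = q[4] + q[5]                # unary encoding of the range [0, 2]
    return (f - h) ** 2

# The two expressions agree on every binary assignment
for q in product((0, 1), repeat=6):
    assert penalty_printed(q) == penalty_formula(q)
```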


### Set Constraint Weights¶

You can set weights for the constraints. Penalty functions, multiplied by their corresponding weights, are added to the objective function when the constraints are given to a logical model such as BinaryQuadraticModel. This means that the weight of a constraint is relative to the objective function and to other constraint objects.

from amplify import BinarySymbolGenerator
from amplify.constraint import one_hot

gen = BinarySymbolGenerator()
q = gen.array(4)  # Generate variable array of length 4
g = 1 * one_hot(q)

>>> g
(q_0 + q_1 + q_2 + q_3 == 1, 1)


When a weight is set to a constraint object, it is represented as a pair of the constraint and the weight (BinaryConstraintTerm). The value of the weight can be changed later, but the constraint condition itself cannot be changed.

>>> g[1] = 2
>>> g
(q_0 + q_1 + q_2 + q_3 == 1, 2)


### Constraints Class¶

It is possible to put multiple constraint objects together as a BinaryConstraints class.

from amplify import BinarySymbolGenerator
from amplify.constraint import one_hot

gen = BinarySymbolGenerator()
q = gen.array(2, 2)  # Generate 2x2 variable array
g1 = one_hot(q[0])
g2 = 2 * one_hot(q[1])
g = g1 + g2

>>> g
[(q_0 + q_1 == 1, 1), (q_2 + q_3 == 1, 2)]


Constraints can be put together as a BinaryConstraints object regardless of whether they are weighted or not. Unweighted constraint objects are given a default weight of 1.

The weights can be multiplied or divided across the entire constraints object as follows:

>>> g *= 2
>>> g
[(q_0 + q_1 == 1, 2), (q_2 + q_3 == 1, 4)]


Please refer to One-hot constraint formulation for examples of bundling constraints and setting weights.

### Multiplication and Division Operators¶

Multiplication and division over a constraint(s) object changes the value of the constraint weight.

The relation between multiplicand/dividend and return values is as follows: