Example NP problems published in A. Lucas, Front. Phys. (2014): Minimum feedback edge set problem
This example code implements the minimum feedback edge set problem introduced in the paper A. Lucas, "Ising formulations of many NP problems", Front. Phys. (2014) using Fixstars Amplify. Other NP-complete and NP-hard problems introduced in the same paper are also discussed below (the corresponding sections in the paper are shown in parentheses).
 Graph partitioning problem (Sec. 2.2)
 Maximum clique problem (Sec. 2.3)
 Exact cover problem (Sec. 4.1)
 Set packing problem (Sec. 4.2)
 Minimum vertex cover problem (Sec. 4.3)
 Satisfiability problem (SAT) (Sec. 4.4)
 Minimum maximum matching problem (Sec. 4.5)
 Graph coloring problem (Sec. 6.1)
 Clique cover problem (Sec. 6.2)
 Job sequencing problem with integer lengths (Sec. 6.3)
 Hamiltonian cycle problem (Sec. 7.1)
 Directed feedback vertex set problem (Sec. 8.3)
 Minimum feedback edge set problem (Sec. 8.5)
 Graph isomorphism problem (Sec. 9)
Minimum feedback edge set problem
Given a directed graph $G$, a subset $F$ of the edges of $G$ such that any closed path in $G$ contains at least one edge in $F$ is called a feedback edge set. In other words, $F$ is a feedback edge set of $G$ if $G$ has no closed path that uses only edges not contained in $F$.
For example, in the graph below, the edges shown in orange form a feedback edge set.
The minimum feedback edge set problem is the problem of finding, for a directed graph $G$, a feedback edge set of $G$ with the minimum number of elements.
Here, we use Fixstars Amplify to solve the minimum feedback edge set problem. The formulation of this sample program follows that of Sec. 8.5 of A. Lucas, Front. Phys. (2014).
Problem definition
First, we create a directed graph $G$ using NetworkX as the instance of the minimum feedback edge set problem we solve in this example program.
import networkx as nx
N = 8 # Number of vertices
G = nx.DiGraph()
G.add_nodes_from(range(N))
edge_list = [
    (0, 1),
    (0, 6),
    (1, 2),
    (2, 3),
    (3, 1),
    (3, 4),
    (3, 5),
    (4, 5),
    (5, 6),
    (6, 7),
    (7, 0),
]
G.add_edges_from(edge_list)
node_labels = {
    0: "a",
    1: "b",
    2: "c",
    3: "d",
    4: "e",
    5: "f",
    6: "g",
    7: "h",
}
pos = nx.circular_layout(G) # Save the layout of the graph
nx.draw_networkx(G, node_size=600, font_color="w", labels=node_labels, pos=pos)
It is easy to see that removing the edges $b\rightarrow c$ and $a\rightarrow g$ from the created graph $G$ leaves no closed path. That is, $\{b\rightarrow c, a\rightarrow g\}$ is a feedback edge set of $G$.
Also, since the closed paths $b\rightarrow c\rightarrow d\rightarrow b$ and $a\rightarrow g\rightarrow h\rightarrow a$ share no edges, any feedback edge set of $G$ has at least $2$ elements.
Therefore, for this problem, the minimum number of elements in the feedback edge set of $G$ is $2$.
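Before formulating the problem, this claim can be sanity-checked programmatically. The following sketch is not part of the formulation; the helper `has_cycle` is our own, using Kahn's algorithm to confirm that the original graph contains a closed path while the graph without $\{b\rightarrow c, a\rightarrow g\}$ does not:

```python
from collections import deque

def has_cycle(n, edges):
    """Kahn's algorithm: True iff the directed graph contains a closed path."""
    indeg = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in range(n) if indeg[v] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for w in adj[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen < n  # a leftover vertex lies on a cycle

edge_list = [(0, 1), (0, 6), (1, 2), (2, 3), (3, 1), (3, 4),
             (3, 5), (4, 5), (5, 6), (6, 7), (7, 0)]
F = [(1, 2), (0, 6)]  # b->c and a->g (vertices a..h are 0..7)
print(has_cycle(8, edge_list))                             # True
print(has_cycle(8, [e for e in edge_list if e not in F]))  # False
```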
Formulation
Let $N$ be the number of vertices and $M$ the number of edges of $G$ hereafter.
Formulation guidelines
First, we associate a binary variable $y_e$ with each of the $M$ edges to indicate whether the feedback edge set $F$ includes that edge: the variable is $0$ if $F$ includes the edge, and $1$ if not.
Next, the condition "there is no closed path through only those edges of $G$ that are not included in $F$" can be rephrased as the condition that "if we number the vertices of $G$ appropriately, all edges of $G$ not contained in $F$ go from a vertex with a smaller number to one with a larger number" (proof: $\Rightarrow$ is straightforward, $\Leftarrow$ is via topological sorting).
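The $\Leftarrow$ direction can be illustrated concretely: topologically sorting the graph with the feedback edges removed produces exactly such a numbering. A minimal sketch using the standard-library graphlib module (Python 3.9+), with the kept edges of the example graph:

```python
from graphlib import TopologicalSorter

# Edges of the example graph minus the feedback edge set {a->g, b->c}
kept = [(0, 1), (2, 3), (3, 1), (3, 4), (3, 5), (4, 5), (5, 6), (6, 7), (7, 0)]
preds = {v: set() for v in range(8)}
for u, v in kept:
    preds[v].add(u)  # TopologicalSorter expects predecessor sets
order = list(TopologicalSorter(preds).static_order())
number = {v: i for i, v in enumerate(order)}
# Every kept edge goes from a smaller number to a larger one
print(all(number[u] < number[v] for u, v in kept))  # True
```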
This numbering can be expressed by using a binary variable table $x$ of $N\times N$, where the binary variable in row $v$ and column $i$ is $1$ when the number of vertex $v$ is $i$.
We use the graph created above, shown below, to illustrate an example of the variable mapping.
For this graph, if we number the vertices as follows, the two orange edges form the feedback edge set (because those edges go in the direction of decreasing numbers), and all black edges go from a smaller-numbered vertex to a larger-numbered one.
The binary variables $y$ and $x$ corresponding to this way of choosing the feedback edge set and numbering the vertices are shown in the following table.
| Edge | $a\rightarrow b$ | $a\rightarrow g$ | $b\rightarrow c$ | $c\rightarrow d$ | $d\rightarrow b$ | $d\rightarrow e$ | $d\rightarrow f$ | $e\rightarrow f$ | $f\rightarrow g$ | $g\rightarrow h$ | $h\rightarrow a$ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| $y$ | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |

| $x$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|
| $a$ | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| $b$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| $c$ | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $d$ | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| $e$ | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| $f$ | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| $g$ | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| $h$ | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
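The $y$ row of this table can be reproduced from the numbering alone; the short check below (the dictionaries are just the assignment from the table) confirms that exactly the edges pointing toward a smaller number get $y = 0$:

```python
# Numbering of vertices a..h taken from the x table above
number = {"a": 6, "b": 7, "c": 0, "d": 1, "e": 2, "f": 3, "g": 4, "h": 5}
edges = [("a", "b"), ("a", "g"), ("b", "c"), ("c", "d"), ("d", "b"),
         ("d", "e"), ("d", "f"), ("e", "f"), ("f", "g"), ("g", "h"), ("h", "a")]
# y_e = 0 iff the edge goes toward a smaller (or equal) number, i.e. is in F
y = [0 if number[u] >= number[v] else 1 for u, v in edges]
print(y)  # [1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1] -- matches the y row above
```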
Objective function

Since the number of elements in the feedback edge set should be as small as possible, and the number of elements in $F$ equals $M - \sum_{e=0}^{M-1} y_e$, the objective function is $\displaystyle -\sum_{e=0}^{M-1} y_e$ (the constant $M$ can be dropped). Note that $y_e$ is $0$ if the edge $e$ is included in the feedback edge set $F$ and $1$ otherwise.
Constraints
For $y$ and $x$ to represent the feedback edge set, we need the following.
 Condition 1: Each vertex of $G$ has a single number between $0$ and $N-1$. That is, each row of $x$ has exactly one $1$.
 Condition 2: For an edge $u\rightarrow v$ of $G$, the number of $u$ is less than the number of $v$ if $u\rightarrow v$ is not in the feedback edge set $F$.
Condition 1 is written as
$$ \sum_{i=0}^{N-1} x_{v, i} = 1 \quad \text{for} \quad v \in \{0, 1, \ldots, N-1\}. $$
Condition 2 can be rephrased as: "if the edge $u\rightarrow v$ is not contained in $F$, then for integers $i \leq j$, $x_{u, j}$ and $x_{v, i}$ must not both be $1$." Therefore, condition 2 is written as
$$ y_{u\rightarrow v} x_{u, j} x_{v, i} = 0 \quad \text{for} \quad (u, v) \in E, \ 0 \leq i \leq j < N. $$
Here, $E$ represents the edge set of $G$ and $y_{u\rightarrow v}$ is the element of $y$ corresponding to the edge $u\rightarrow v$.
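As a numeric sanity check of this formulation, we can evaluate all the cubic terms for the assignment shown in the tables above; for a valid pair $(y, x)$ every term vanishes (plain Python, no Amplify involved):

```python
N = 8
number = [6, 7, 0, 1, 2, 3, 4, 5]  # numbers of vertices a..h (from the table)
edges = [(0, 1), (0, 6), (1, 2), (2, 3), (3, 1), (3, 4),
         (3, 5), (4, 5), (5, 6), (6, 7), (7, 0)]
F = {(0, 6), (1, 2)}  # a->g and b->c
x = [[1 if i == number[v] else 0 for i in range(N)] for v in range(N)]
y = [0 if e in F else 1 for e in edges]
# Sum of all terms y_e * x[u][j] * x[v][i] over 0 <= i <= j < N
penalty = sum(y[e] * x[u][j] * x[v][i]
              for e, (u, v) in enumerate(edges)
              for i in range(N)
              for j in range(i, N))
print(penalty)  # 0: all condition-2 terms vanish
```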
The expression for condition 2 is cubic and must be converted to a quadratic one using auxiliary variables to be solved by an Ising machine. The Fixstars Amplify SDK provides a function to perform this conversion automatically. Therefore, we describe the following two methods. Method 1: using the Fixstars Amplify SDK's polynomial-degree reduction function; and Method 2: manually reducing condition 2 to a quadratic expression.
Implementation (Method 1: Using Amplify SDK's polynomial-degree reduction function)
Using the problem and formulation described above, let us implement and solve the problem. First, we use the Amplify SDK's function to reduce the polynomial degree.
We create binary variables $y$ and $x$ using VariableGenerator.
from amplify import VariableGenerator
gen = VariableGenerator()
M = len(G.edges) # Number of edges
y = gen.array("Binary", shape=(M,))
x = gen.array("Binary", shape=(N, N)) # N is the number of vertices
Next, we construct the objective function $\displaystyle -\sum_e y_e$.
cost = -y.sum()
Let us now create the constraint corresponding to condition 1. Condition 1 is a one-hot constraint on each row of $x$.
from amplify import one_hot
constraint1 = one_hot(x, axis=1)
Then, we construct the constraint corresponding to condition 2. Condition 2 is the constraint $y_{u\rightarrow v} x_{u, j} x_{v, i} = 0 \bigl((u, v) \in E, \ 0 \leq i \leq j < N\bigr)$.
from amplify import equal_to, sum as amplify_sum
constraint2 = amplify_sum(
    equal_to(y[e] * x[u, j] * x[v, i], 0)
    for e, (u, v) in enumerate(G.edges)
    for i in range(N)
    for j in range(i, N)
)
Finally, the objective function and constraints are combined and converted into an optimization model.
Since the constraints are given to the Ising machine as penalty terms added to the objective function, we determine the constraint weights by estimating values approximately equal to or slightly larger than the possible values of the objective function. In this case, the constraint weights are set to $2$.
penalty_multiplier = 2
model = cost + penalty_multiplier * (constraint1 + constraint2)
Let us set the client and execute the solver with Fixstars Amplify Annealing Engine (AE). To reduce the number of auxiliary variables issued when performing the polynomial-degree reduction, we specify Substitute as the quadratization algorithm. This algorithm is useful when many third-order or higher-order terms share a common quadratic factor $q_iq_j$.
Since the Amplify SDK automatically filters out solutions that do not satisfy the constraints, if the result is not empty, you know that a solution satisfying the constraints has been found.
from amplify import FixstarsClient, solve
from datetime import timedelta
client = FixstarsClient()
# client.token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" # If you use Amplify in a local environment or Google Colaboratory, enter your Amplify API token.
client.parameters.timeout = timedelta(milliseconds=1000) # timeout is 1000 ms
# Solve the problem
result = solve(
    model,
    client,
    quadratization_method="Substitute",  # algorithm for polynomial degree reduction
)
if len(result) == 0:
    print("No solution has been found.")
else:
    print("A solution has been found.")
Finally, we visualize the solution. As we discussed earlier, the minimum number of elements in the feedback edge set of $G$ is $2$, so if there are 2 orange edges, we have found the optimal solution.
import numpy as np
# evaluate the binary variables
values = result.best.values
y_values = y.evaluate(values)
x_values = x.evaluate(values)
# Read off the number assigned to each vertex
numbering = dict(np.argwhere(x_values == 1))
# Identify whether each edge is contained in F
edge_colors = ["C1" if e == 0 else "k" for e in y_values]
edge_width = [2.0 if e == 0 else 1.0 for e in y_values]
# Visualize
nx.draw_networkx(
    G,
    node_size=600,
    font_color="w",
    labels=numbering,
    edge_color=edge_colors,
    width=edge_width,
    pos=pos,
)
We have now implemented a program to find the minimum feedback edge set. Next, we describe how to formulate and solve the same problem without using the degree reduction feature of the Amplify SDK.
Formulation (Method 2: Making the formulation quadratic)
In the above formulation, condition 2 became a cubic expression. Here, we consider expressing condition 2 in a quadratic form by adding a new binary variable.
As mentioned earlier, condition 2 is the following condition.
For an edge $u\rightarrow v$ of $G$, if $u\rightarrow v$ is not contained in the feedback edge set $F$, then the number of $u$ is less than the number of $v$.
Formulation guidelines
Assume the binary variables $y$ and $x$ are defined as above.
If we know that the number of vertex $u$ is $i$, then the constraint "the number of $u$ is less than the number of $v$" is expressed by a firstorder expression as:
$$ \sum_{j>i} x_{v, j} = 1. $$
Therefore, if for each edge we can obtain first-order expressions for whether the feedback edge set $F$ includes the edge and for the number of its starting point, we can express condition 2 in quadratic form by combining these expressions.
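As a small illustration of this first-order expression (the one-hot vector below is a hypothetical row of $x$):

```python
N = 8
k = 5                                          # suppose vertex v is numbered 5
x_v = [1 if j == k else 0 for j in range(N)]   # one-hot row of x for vertex v
print(sum(x_v[3 + 1:]))  # i = 3 < k, so the sum over j > i is 1
print(sum(x_v[6 + 1:]))  # i = 6 >= k, so the sum over j > i is 0
```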
We introduce an $M \times N$ binary variable table $z$ to express, for each edge, whether $F$ contains it and the number of its starting point. Here $M$ is the number of edges in $G$, and $N$ is the number of vertices in $G$. The row of $z$ corresponding to $u\rightarrow v$ is all $0$ if the edge $u\rightarrow v$ is contained in $F$; otherwise, only column $i$ is $1$, where $i$ is the number of $u$ (the starting point).
For example, for the following feedback edge selection/numbering scheme, $z$ will be as in the table below.
| $z$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|
| $a\rightarrow b$ | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| $a\rightarrow g$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $b\rightarrow c$ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $c\rightarrow d$ | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| $d\rightarrow b$ | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| $d\rightarrow e$ | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| $d\rightarrow f$ | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| $e\rightarrow f$ | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| $f\rightarrow g$ | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| $g\rightarrow h$ | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| $h\rightarrow a$ | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
Second-order formulation of condition 2

For condition 2, $z$ must satisfy the following. We write $z_{u\rightarrow v}$ for the row of $z$ corresponding to the edge $u\rightarrow v$.

 Condition 2-1: Each row of $z$ represents whether the corresponding edge is included in the feedback edge set $F$. That is, each row of $z$ is all $0$ if the corresponding edge is contained in $F$, and exactly one column of the row is $1$ otherwise.

 Condition 2-2: Each row of $z$ represents the starting point's number of the edge if the feedback edge set $F$ does not include the corresponding edge. That is, $z_{u\rightarrow v, i} = 1$ means that $u\rightarrow v$ is not contained in $F$ and the number of $u$ is $i$.

 Condition 2-3: If the edge $u\rightarrow v$ is not contained in $F$ and the number of vertex $u$ is $i$, then the number of vertex $v$ is greater than $i$.
Condition 2-1 is written as
$$ \sum_{i = 0}^{N - 1} z_{e, i} = y_e \quad \text{for} \quad e \in E. $$
Recall that $y_e$ is a binary variable that is $0$ if the edge $e$ is contained in $F$ and $1$ otherwise.
For condition 2-2, it is sufficient to impose the condition that "if $z_{u\rightarrow v, i} = 1$, then the number of $u$ is $i$". This is because it is clear from condition 2-1 that "if $z_{u\rightarrow v, i} = 1$, then $u\rightarrow v$ is not contained in $F$". Therefore, condition 2-2 is satisfied by the following equation.
$$ z_{u\rightarrow v, i}(1 - x_{u, i}) = 0 \quad \text{for} \quad (u\rightarrow v) \in E, \ i \in \{0, 1, \ldots, N-1\}. $$
Given condition 2-2, the premise of condition 2-3 is equivalent to $z_{u\rightarrow v, i} = 1$. Also, as mentioned earlier, the condition "the number of vertex $v$ is greater than $i$" is expressed by $\sum_{j>i} x_{v, j} = 1$, so condition 2-3 is:
$$ z_{u\rightarrow v, i} \Bigl(1 - \sum_{j>i} x_{v, j}\Bigr) = 0 \quad \text{for} \quad (u\rightarrow v) \in E, \ i \in \{0, 1, \ldots, N-1\}. $$
Conditions 2-1, 2-2, and 2-3 are now formulated as first- and second-order expressions. It is easy to see that condition 2 is satisfied if all these conditions are satisfied.
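As a final sanity check, we can verify in plain Python that the $z$ table above, together with the $x$ and $y$ tables from Method 1, satisfies conditions 2-1 through 2-3:

```python
N = 8
number = [6, 7, 0, 1, 2, 3, 4, 5]  # numbers of vertices a..h
edges = [(0, 1), (0, 6), (1, 2), (2, 3), (3, 1), (3, 4),
         (3, 5), (4, 5), (5, 6), (6, 7), (7, 0)]
F = {(0, 6), (1, 2)}  # a->g and b->c
x = [[1 if i == number[v] else 0 for i in range(N)] for v in range(N)]
y = [0 if e in F else 1 for e in edges]
# Row e of z: all zeros if edge e is in F, else one-hot at the start number
z = [[y[e] * x[u][i] for i in range(N)] for e, (u, v) in enumerate(edges)]
cond21 = all(sum(z[e]) == y[e] for e in range(len(edges)))
cond22 = all(z[e][i] * (1 - x[u][i]) == 0
             for e, (u, v) in enumerate(edges) for i in range(N))
cond23 = all(z[e][i] * (1 - sum(x[v][i + 1:])) == 0
             for e, (u, v) in enumerate(edges) for i in range(N))
print(cond21, cond22, cond23)  # True True True
```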
Implementation (Method 2: Making the formulation quadratic)
Let us solve the minimum feedback edge set problem in the above formulation as well.
Since the definitions of the objective function and the constraint for condition 1 are the same as in the first formulation, we omit the explanation.
from amplify import VariableGenerator
from amplify import one_hot
gen = VariableGenerator()
M = len(G.edges) # number of edges
y = gen.array("Binary", shape=(M,))
x = gen.array("Binary", shape=(N, N)) # N is number of nodes
cost = -y.sum()
constraint1 = one_hot(x, axis=1)
To implement condition 2, we define the binary variable table $z$.
z = gen.array("Binary", shape=(M, N))
We will construct condition 2-1: $\sum_{i = 0}^{N - 1} z_{e, i} = y_e \ (e \in E)$.
from amplify import equal_to
constraint2_1 = equal_to(z.sum(axis=1) - y, 0, axis=())
Then we will construct condition 2-2: $z_{u\rightarrow v, i} (1 - x_{u, i}) = 0 \bigl((u\rightarrow v) \in E, \ i \in \{0, 1, \ldots, N-1\}\bigr)$.
from amplify import sum
constraint2_2 = sum(
    equal_to(z[e, i] * (1 - x[u, i]), 0)
    for e, (u, v) in enumerate(G.edges)
    for i in range(N)
)
Finally, condition 2-3 is implemented: $z_{u\rightarrow v, i} \bigl(1 - \sum_{j>i} x_{v, j}\bigr) = 0 \ \bigl((u\rightarrow v) \in E, \ i \in \{0, 1, \ldots, N-1\}\bigr)$.
Under the condition that condition 1 holds, the left-hand side of condition 2-3 takes the minimum value $0$, so the polynomial itself can serve as a penalty function. We can specify a penalty function manually by passing it to the Constraint constructor (see the Amplify SDK documentation for details).
On the other hand, if we used the equal_to function, the penalty function generated internally would be the square of the left-hand side, a fourth-order expression. Therefore, we avoid using equal_to for this constraint.
from amplify import Constraint, sum
constraint2_3 = sum(
    Constraint(
        z[e, i] * (1 - x[v, i + 1:].sum()),
        eq=0,
        penalty=z[e, i] * (1 - x[v, i + 1:].sum()),
    )
    for e, (u, v) in enumerate(G.edges)
    for i in range(N)
)
The objective function and constraints are combined to form an optimization model. Since the penalty function of constraint2_3 may not act as a proper penalty when condition 1 is not satisfied, we increase the weight of condition 1.
model = cost + 2 * constraint1 + constraint2_1 + constraint2_2 + constraint2_3
Let us execute the solver on the Fixstars Amplify Annealing Engine (AE) and visualize the results.
from amplify import FixstarsClient, solve
from datetime import timedelta
client = FixstarsClient()
# client.token = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" # If you use Amplify in a local environment or Google Colaboratory, enter your Amplify API token.
client.parameters.timeout = timedelta(milliseconds=1000) # timeout is 1000 ms
# Solve the problem
result = solve(model, client)
if len(result) == 0:
    print("No solution has been found.")
else:
    print("A solution has been found.")
values = result.best.values
y_values = y.evaluate(values)
x_values = x.evaluate(values)
numbering = dict(np.argwhere(x_values == 1))
edge_colors = ["C1" if e == 0 else "k" for e in y_values]
edge_width = [2.0 if e == 0 else 1.0 for e in y_values]
nx.draw_networkx(
    G,
    node_size=600,
    font_color="w",
    labels=numbering,
    edge_color=edge_colors,
    width=edge_width,
    pos=pos,
)