On this page, we’re going to talk about solving NP-complete problems with two algorithms: an exhaustive one and backtracking. At the end of the video, you should be able to implement these two algorithms to solve the traveling salesman problem.
Computing a solution
Classically, NP-complete problems are tackled through either:
- exact algorithms, which are reasonably fast for small problems,
- or suboptimal or heuristic algorithms, which deliver good-enough solutions that are not necessarily optimal.
Today, let’s explore two exact algorithms to solve the traveling salesman problem, or TSP.
Brute force algorithm
The most direct solution to the TSP is to try all possible elementary paths, or, permutations of the cities, and see which one is the cheapest. This is known as a brute force, or exhaustive search.
If the starting city is fixed, the number of such permutations equals (n − 1)!, where n is the number of cities. Consequently, the calculation time, or the time it takes the computer to solve the problem with this approach, lies within a polynomial factor of (n − 1)!.
To give this complexity some more concrete meaning, imagine that evaluating one permutation, or one elementary path, takes 1 microsecond. The following table summarizes the calculation time as the number of cities increases.

| Cities | Elementary paths (n − 1)! | Calculation time |
|---|---|---|
| 5 | 24 | 24 microseconds |
| 10 | 362,880 | ≈ 0.36 seconds |
| 15 | ≈ 8.7 × 10^10 | ≈ 24 hours |
| 20 | ≈ 1.2 × 10^17 | ≈ 3,900 years |
| 30 | ≈ 8.8 × 10^30 | ≈ 3 million billion centuries |

For example, for a problem with 5 cities, the number of possible elementary paths equals 24, and the calculation time is 24 microseconds. Multiplying the number of cities by only 3, to 15, increases the resolution time to approximately 24 hours, and multiplying it by 6, to 30 cities, raises it to around 3 million billion centuries.
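These figures follow directly from the factorial. As a quick sanity check, here is a small Python sketch, under the same assumption of one elementary path evaluated per microsecond:

```python
from math import factorial

def time_for(n_cities, microseconds_per_path=1):
    """Time to try every elementary path from a fixed starting city,
    assuming one path is evaluated per microsecond."""
    paths = factorial(n_cities - 1)   # (n - 1)! permutations
    return paths * microseconds_per_path

print(time_for(5))           # 24 microseconds
print(time_for(15) / 3.6e9)  # about 24 hours (microseconds -> hours)
```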
So it’s quite obvious that this algorithm becomes impractical even for only 20 cities. Let’s have a look at an example. Consider this graph.
The starting point for the traveling salesman is vertex A, and imagine for the sake of simplicity that he does not have to return to A in the end. The number of solutions to explore is therefore (n − 1)! if n is the number of cities.
Trying all possible solutions starting from A can be done using a depth-first search, during which we keep in memory the shortest path encountered so far. Until now in this MOOC, we have considered DFS in which each vertex is explored only once. Here, we are interested in the case where a vertex may not appear twice on the same path, but may appear on several different paths. So we consider a variation of the DFS in which the list of explored vertices applies only to the branch currently being explored, not to the other branches.
The tree of all the possible paths is shown here.
The search starts on the left, by proposing the first solution, A–D–C–B, with a cost of 16. This solution is stored in memory as the best one identified so far. The next explored solution is A–D–B–C, which has a cost of 10. This solution is better than the currently known best one, A–D–C–B, so this new solution is stored in memory as the current best. The search continues until all possible solutions have been evaluated. The output of the algorithm is the solution A–C–B–D, which has the lowest possible cost of 7. A total of six solutions have been evaluated in this depth-first search.
The algorithm that runs through the previous tree can be seen here.
best = infinity
best_path = empty_list

def bruteforce(remaining, vertex, path, weight, graph):
    if remaining is empty:
        if weight < best:
            best = weight
            best_path = path
    else:
        for each vertex i in remaining:
            bruteforce(remaining - i, i, path + i, weight + graph[vertex][i], graph)
It is a recursive algorithm that calls itself until a stopping condition is reached. The idea behind the algorithm is to keep in memory the best possible path that has been encountered thus far, and to update it when a better or shorter one is encountered.
The algorithm is then called as follows.
bruteforce(vertexes_List - starting_vertex, starting_vertex, empty_list, 0, graph)
The arguments of the call are as follows:
- the list of vertexes (minus the starting point),
- the initial vertex,
- an empty list, which will contain the optimal path in the end,
- the initial weight of this empty list, which is 0,
- and the graph.
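A runnable Python version of this pseudocode might look as follows. The 4-city graph and its weights are hypothetical, chosen for illustration; and rather than using global variables, the best (cost, path) pair is threaded through the recursion and returned, which is slightly more idiomatic:

```python
from math import inf

def bruteforce(remaining, vertex, path, weight, graph, best):
    """Try every permutation of the remaining cities.
    `best` is a (cost, path) pair carried through the recursion."""
    if not remaining:
        if weight < best[0]:
            best = (weight, path)
    else:
        for i in remaining:
            best = bruteforce(remaining - {i}, i, path + [i],
                              weight + graph[vertex][i], graph, best)
    return best

# Hypothetical 4-city graph with symmetric edge weights.
graph = {
    'A': {'B': 3, 'C': 4, 'D': 7},
    'B': {'A': 3, 'C': 1, 'D': 2},
    'C': {'A': 4, 'B': 1, 'D': 8},
    'D': {'A': 7, 'B': 2, 'C': 8},
}
cost, path = bruteforce(set(graph) - {'A'}, 'A', ['A'], 0, graph, (inf, None))
print(cost, path)  # 7 ['A', 'C', 'B', 'D']
```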
A quick improvement to the previous algorithm is to stop the depth-first search as soon as it is certain that the branch currently being explored cannot generate a better solution than the current best one. This is the case when the total weight of the edges used in the current branch of the tree already exceeds the total weight of the best solution found. This means that the order in which branches are explored influences the overall calculation time. For example, imagine that by chance we start with the branch that generates the optimal solution. When exploring the other branches, we abort each of them as soon as its total weight exceeds the one found for the first branch, which saves a lot of calculation time.
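A sketch of this cut, on a hypothetical 4-city graph chosen for illustration: the only change from the exhaustive version is the pruning test at the top of the function.

```python
from math import inf

graph = {  # hypothetical example graph, symmetric weights
    'A': {'B': 3, 'C': 4, 'D': 7},
    'B': {'A': 3, 'C': 1, 'D': 2},
    'C': {'A': 4, 'B': 1, 'D': 8},
    'D': {'A': 7, 'B': 2, 'C': 8},
}

def backtrack(remaining, vertex, path, weight, graph, best):
    if weight >= best[0]:
        # Prune: this branch already weighs as much as the best tour.
        return best
    if not remaining:
        # Reaching a leaf past the cut above means a strict improvement.
        return (weight, path)
    for i in remaining:
        best = backtrack(remaining - {i}, i, path + [i],
                         weight + graph[vertex][i], graph, best)
    return best

cost, path = backtrack(set(graph) - {'A'}, 'A', ['A'], 0, graph, (inf, None))
print(cost, path)  # 7 ['A', 'C', 'B', 'D']
```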
Various strategies can be used to further optimize backtracking, by preselecting which branches to examine first. As an example, we could select the branch whose starting vertex is the closest to the current vertex.
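Combining the cut with this nearest-neighbor ordering can be sketched as follows (same style of hypothetical graph; a call counter shows how much of the tree is actually visited):

```python
from math import inf

graph = {  # hypothetical example graph, symmetric weights
    'A': {'B': 3, 'C': 4, 'D': 7},
    'B': {'A': 3, 'C': 1, 'D': 2},
    'C': {'A': 4, 'B': 1, 'D': 8},
    'D': {'A': 7, 'B': 2, 'C': 8},
}

calls = 0

def backtrack(remaining, vertex, path, weight, best):
    global calls
    calls += 1
    if weight >= best[0]:
        return best            # prune: cannot improve on the best tour
    if not remaining:
        return (weight, path)  # strict improvement
    # Explore the closest unvisited city first, so that a good tour is
    # found early and later branches are pruned sooner.
    for i in sorted(remaining, key=lambda c: graph[vertex][c]):
        best = backtrack(remaining - {i}, i, path + [i],
                         weight + graph[vertex][i], best)
    return best

cost, path = backtrack(set(graph) - {'A'}, 'A', ['A'], 0, (inf, None))
print(cost, calls)  # best cost 7, fewer calls than the 16 nodes of the full tree
```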
Let’s illustrate this particular backtracking approach on the same graph again, shown here.
The tree of all possible paths from vertex A is shown here.
First, we try to move to A’s closest neighbor, which is B. It costs 3 to move to B, so our route has a cost of 3 so far. Then we continue to C, which is the closest unexplored neighbor of B. It costs 1 more, so the total length is 4. Finally, we move to D, the last remaining vertex. It adds 8 to our sum, so we obtain a total of 12. We thus have a first estimate of the shortest path, A–B–C–D with length 12, even though this path is not optimal.
There is no possible movement from this configuration, so we move backwards.
Again, there are no other alternatives here, as we’ve already explored D after C, so we continue backwards.
Now we have an alternative option, which is to go from B to D. We add the corresponding weight of 2 and obtain a partial route of length 5.
There’s only one option left, which is to visit C. However, this move would cost an additional 8, resulting in a total of 13, which is more than our previously found path of length 12. So we move backwards instead.
We’ve now explored all the options starting with A then B, so we continue backwards. We now select another option from A, which is to go to its second-nearest neighbor, C. From there we go to B first, and finally to D. We obtain a total length of 7, and thus update the shortest route found so far.
We go backwards. Twice. Now we have the option of going from C to D, but it would cost too much compared to our best route, so we continue backwards. From A there’s only one option left: going to D. That first edge alone already costs 7, which is as much as our entire best route, so we cannot expect to find a shorter path in this direction.
So, we can conclude that the shortest route has a length of 7, and yet we’ve only explored a fraction of the tree of all possible paths.
So, in practice, the backtracking solution explores far fewer solutions than the brute force approach, even though its worst-case complexity is the same.
The branch-and-bound algorithm
A third algorithm for solving exactly the TSP is the branch-and-bound algorithm.
The branch-and-bound algorithm explores the tree of possibilities (in the case of the traveling salesman, the tree of possible paths), typically with a depth-first traversal. At each vertex it encounters, it tries to estimate an interval, as narrow as possible, bounding the length of the shortest path obtainable from the branch of the tree where it currently is.
To obtain these intervals, there are several techniques.
To obtain an upper bound, it is sufficient to use a greedy algorithm that estimates the length of the remaining path by always moving to the closest unvisited vertex. The resulting path is not necessarily optimal, so its length can only be greater than or equal to that of the shortest path.
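As a sketch, this greedy completion might look like the following, on a hypothetical 4-city graph chosen for illustration:

```python
graph = {  # hypothetical example graph, symmetric weights
    'A': {'B': 3, 'C': 4, 'D': 7},
    'B': {'A': 3, 'C': 1, 'D': 2},
    'C': {'A': 4, 'B': 1, 'D': 8},
    'D': {'A': 7, 'B': 2, 'C': 8},
}

def greedy_upper_bound(remaining, vertex):
    """Complete the path by always hopping to the closest unvisited city.
    The result is a feasible path, so its length is an upper bound on
    the optimal completion."""
    total, left = 0, set(remaining)
    while left:
        nxt = min(left, key=lambda c: graph[vertex][c])
        total += graph[vertex][nxt]
        vertex = nxt
        left.remove(nxt)
    return total

ub = greedy_upper_bound({'B', 'C', 'D'}, 'A')
print(ub)  # 12, while the optimal path for this graph has length 7
```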
To obtain a lower bound, we can, for example, count the number of vertices not yet visited (let’s call this number k). In the best case, they can be reached through the k edges with the lowest weights, so the sum of these k weights gives a lower bound. This bound is often coarse, and tighter bounds are preferable.
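This coarse lower bound can be sketched as follows (same style of hypothetical graph): sum the k cheapest edge weights, where k is the number of unvisited cities.

```python
graph = {  # hypothetical example graph, symmetric weights
    'A': {'B': 3, 'C': 4, 'D': 7},
    'B': {'A': 3, 'C': 1, 'D': 2},
    'C': {'A': 4, 'B': 1, 'D': 8},
    'D': {'A': 7, 'B': 2, 'C': 8},
}

def edges_lower_bound(remaining):
    """k unvisited cities require at least k more edges; in the best case
    these are the k globally cheapest edges, so their sum is a valid
    (if coarse) lower bound."""
    k = len(remaining)
    # Each undirected edge is counted once thanks to the u < v filter.
    weights = sorted(graph[u][v] for u in graph for v in graph[u] if u < v)
    return sum(weights[:k])

lb = edges_lower_bound({'B', 'C', 'D'})
print(lb)  # 6, below the true optimum of 7 for this graph
```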
When the algorithm has determined several intervals (one per branch to explore), it can compare them with each other. In the case of searching for a path of minimum length, if the upper bound of the interval of a branch A is smaller than the lower bound of the interval of another branch B, then the sub-tree corresponding to B is eliminated. Indeed, there is no point in exploring B: even its best solution would be worse than the worst solution obtained by continuing along branch A.
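Putting the two bounds together, a minimal branch-and-bound sketch might look as follows (hypothetical graph as before). Here the best cost found so far plays the role of the incumbent upper bound, and a branch is cut as soon as its lower bound cannot beat it:

```python
from math import inf

graph = {  # hypothetical example graph, symmetric weights
    'A': {'B': 3, 'C': 4, 'D': 7},
    'B': {'A': 3, 'C': 1, 'D': 2},
    'C': {'A': 4, 'B': 1, 'D': 8},
    'D': {'A': 7, 'B': 2, 'C': 8},
}

def greedy_completion(remaining, vertex):
    # Upper bound: length of the nearest-neighbour completion.
    total, left = 0, set(remaining)
    while left:
        nxt = min(left, key=lambda c: graph[vertex][c])
        total += graph[vertex][nxt]
        vertex = nxt
        left.remove(nxt)
    return total

def cheapest_edges(k):
    # Lower bound ingredient: sum of the k cheapest edges in the graph.
    weights = sorted(graph[u][v] for u in graph for v in graph[u] if u < v)
    return sum(weights[:k])

def branch_and_bound(remaining, vertex, weight, best):
    if not remaining:
        return min(best, weight)
    if weight + cheapest_edges(len(remaining)) >= best:
        return best  # lower bound >= incumbent: prune this sub-tree
    # A greedy completion is a feasible path, so it may tighten the incumbent.
    best = min(best, weight + greedy_completion(remaining, vertex))
    for i in sorted(remaining, key=lambda c: graph[vertex][c]):
        best = branch_and_bound(remaining - {i}, i,
                                weight + graph[vertex][i], best)
    return best

best_cost = branch_and_bound(set(graph) - {'A'}, 'A', 0, inf)
print(best_cost)  # 7
```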
In the worst case, the complexity of the algorithm is the same as that of an exhaustive search (assuming that estimating the bounds is not itself too costly); this is, for example, the case for a complete graph in which all the edges have the same weight.
In practice, accurately estimating the intervals can save a lot of time; in the best case, the complexity can even become linear.
As in the case of the approximate algorithms that we will study in Episode 5, interval estimation poses the problem of the precision-performance trade-off. Indeed, the narrower the interval, the more parts of the tree it allows to be pruned. However, if estimating it requires an expensive calculation, that impacts the total complexity.
It is therefore necessary to find estimates of the bounds of the intervals that offer a good compromise between computation time and accuracy.