Hi, and welcome to this lesson about queuing structures.
In the previous lessons, we’ve looked at the two main algorithms for graph traversal, DFS and BFS, and how to build and use routing tables to navigate their corresponding graphs. In this lesson, we’ll see how these algorithms can be implemented in practice, using queuing structures.
A queuing structure is a memory structure that can be used to manage priorities between tasks.
There are only two operations that can be performed when using a queuing structure. Namely, adding (or pushing) an element, and removing (or, popping) an element.
I’ll now introduce two queuing structures that can be used to manage priorities in different ways.
Stacks / LIFO / Last-In First-Out
A stack, also known as a Last-In First-Out (or LIFO for short), is a memory structure in which the last element added is the first one to be removed.
Queues / FIFO / First-In First-Out
Conversely, a queue, also known as a First-In First-Out (or FIFO for short), is a memory structure in which the first element added is the first one to be removed.
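To make the two behaviors concrete, here is a minimal Python sketch (one possible way to do it; the lesson itself doesn't prescribe an implementation) using a plain list as a stack and collections.deque as a queue:

```python
from collections import deque

# A stack (LIFO): elements are pushed and popped at the same end
stack = []
for item in ["a", "b", "c"]:
    stack.append(item)                         # push
stack_order = [stack.pop() for _ in range(3)]  # pop
print(stack_order)   # last in, first out

# A queue (FIFO): elements are pushed at one end and popped at the other
queue = deque()
for item in ["a", "b", "c"]:
    queue.append(item)                            # push
queue_order = [queue.popleft() for _ in range(3)]  # pop
print(queue_order)   # first in, first out
```

Note that both structures expose the same two operations (push and pop); only the order in which elements come back out differs.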
A unified algorithm for graph traversal
A queuing structure can be used to manage the order in which we examine vertices when performing graph traversal: at each step, we pop a vertex from the structure, mark it as explored, and push its unexplored neighbors.
By repeating this principle until the queuing structure is empty, we obtain a DFS if we choose to use a LIFO, and a BFS if we use a FIFO.
# This unified algorithm takes as arguments a graph, and a vertex of this
# graph from which to start the traversal using a queuing structure.
# It returns the list of explored_vertices, and a routing table to
# navigate through the graph.
def traversal(start_vertex, graph):
    # First we create either a LIFO or a FIFO
    queuing_structure = new_queuing_structure()
    # Add the starting vertex with None as parent
    queuing_structure.push((start_vertex, None))
    # Initialize the outputs
    explored_vertices = []
    routing_table = {}
    # Iterate while some vertices remain
    while len(queuing_structure) > 0:
        # This returns the next vertex to be examined; the choice of
        # queuing structure (LIFO or FIFO) changes the resulting order
        (current_vertex, parent) = queuing_structure.pop()
        # Test whether the current vertex has already been explored
        if current_vertex not in explored_vertices:
            # Mark the current vertex as explored
            explored_vertices.append(current_vertex)
            # Update the routing table accordingly
            routing_table[current_vertex] = parent
            # Examine the neighbors of the current vertex
            for neighbor in neighbors(graph, current_vertex):
                # Push all unexplored neighbors to the queuing structure
                if neighbor not in explored_vertices:
                    queuing_structure.push((neighbor, current_vertex))
    return explored_vertices, routing_table
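As a runnable sketch of the algorithm above: here the abstract new_queuing_structure and neighbors helpers from the pseudocode are replaced by a collections.deque and an adjacency dictionary. Both of these conventions, as well as the use_lifo flag, are choices made for this sketch, not part of the lesson's interface:

```python
from collections import deque

def traversal(start_vertex, graph, use_lifo):
    """Unified traversal: a LIFO gives DFS, a FIFO gives BFS.

    `graph` is assumed to be an adjacency dictionary,
    e.g. {"A": ["B", "C"], ...}.
    """
    queuing_structure = deque()
    queuing_structure.append((start_vertex, None))
    explored_vertices = []
    routing_table = {}
    while queuing_structure:
        # pop() removes from the same end as append (LIFO behavior);
        # popleft() removes from the opposite end (FIFO behavior)
        if use_lifo:
            current_vertex, parent = queuing_structure.pop()
        else:
            current_vertex, parent = queuing_structure.popleft()
        if current_vertex not in explored_vertices:
            explored_vertices.append(current_vertex)
            routing_table[current_vertex] = parent
            for neighbor in graph[current_vertex]:
                if neighbor not in explored_vertices:
                    queuing_structure.append((neighbor, current_vertex))
    return explored_vertices, routing_table

# A small square graph: A-B, A-C, B-D, C-D
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
dfs_order, dfs_routes = traversal("A", graph, use_lifo=True)
bfs_order, bfs_routes = traversal("A", graph, use_lifo=False)
print(dfs_order)
print(bfs_order)
```

On this graph, the DFS visits a whole branch before backtracking, while the BFS visits all of A's neighbors before moving further away; the code is otherwise identical, which is the whole point of the unified algorithm.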
Let’s illustrate this using an example.
Application with DFS
The queuing structure is now empty, so the algorithm finishes. Notice that using a LIFO is precisely what makes the search keep exploring the graph until it reaches a vertex with no unexplored neighbors, at which point it backtracks to previously explored vertices.
Application with BFS
The queuing structure is now empty, so the algorithm finishes. We’ve now successfully traversed the graph using BFS.
Concluding words
So, that’s how you can use queuing structures to implement DFS and BFS.
Thanks for watching! I hope you enjoyed learning about queuing structures. That’s the end of this week’s lessons. Next week, we’ll see how to deal with weighted graphs.
FIFOs and LIFOs in computer science
To appreciate the importance of these two data structures in computer science, we present two examples of real-life situations in which they are used.
FIFOs in communication systems
For two people to communicate, they must send messages to each other, and for two people to understand each other, they must send messages to each other in a logical order. You will find it easier to understand the sentence "A computer lets you make more mistakes faster than any other invention with the possible exceptions of handguns and Tequila" than "Make and tequila you than a faster the possible any exceptions handguns of invention computer lets more with other mistakes".
For machines, it's the same: order is important, and must be taken into account. Some communication models (others are much more complex) use queues for this purpose. Imagine a model with two computers, A and B: A writes words into a queue, and B reads them. If A sends the words in order, then B will read them in the same order (because the first element inserted in a queue is also the first to leave it), and communication will be possible.
You may be wondering: why go through a FIFO instead of sending the information directly? The advantage is that A and B do not necessarily run at the same speed (quality of the processor, network card…). Thus, if A sends messages to B very quickly (directly), B will miss some of them. Going through a queue allows messages to be stored, in order, which avoids these synchronization problems.
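This buffered-communication idea can be sketched in a few lines of Python; the names channel, send, and receive are invented for this illustration, and the queue simply holds the sender's backlog without losing the order:

```python
from collections import deque

# The shared queue between the fast sender (A) and the slower receiver (B)
channel = deque()

def send(word):
    channel.append(word)        # A pushes at one end

def receive():
    return channel.popleft()    # B pops at the other end

# A sends a whole sentence in one quick burst...
for word in "order is important".split():
    send(word)

# ...and B reads it later, at its own pace, in the same order
received = [receive() for _ in range(3)]
print(" ".join(received))
```

Because the channel is a FIFO, no matter how far ahead A gets, B always reads the words in exactly the order A sent them.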
LIFOs in programming languages
When an operating system runs a program, it allocates some memory to it so that it can do its calculations correctly. This memory is generally divided into two areas: the heap, which stores data, and the stack, which (among other things) makes it possible to call functions in programs.
In most programming languages, when a function f(a, b) is called at a code location, the following (simplified) sequence occurs:
The return address is pushed on the stack. This will allow execution to resume right after the call to f once its execution ends.
Argument a is pushed on the stack.
Argument b is pushed on the stack.
The processor is told to process the code of f.
We pop an element from the stack to get b.
We pop an element from the stack to get a.
The code of f is executed.
Once f is over, we pop an element from the stack to retrieve the return address.
The processor is told to jump to that address.
But why use a stack instead of plain variables to pass this information? Well, the processor doesn't know the details of f! It is possible that f itself calls another function g, for example. If a single variable were used, the return address of f would be overwritten by the return address of g, and once g finished, we could never return to the point where f was called.
How about a FIFO then? Here again, the problem occurs if f calls a function g. As the return address of f is put into the queue before that of g, when it is time to return from the call to g, the processor would instead get the return address of f (the oldest element in the queue)! We would therefore never execute the code of f located after the call to g.
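A toy simulation of this mechanism, with made-up "return addresses" (just string labels invented for the sketch), shows why the LIFO order is the right one for nested calls:

```python
# A toy model of nested calls: each call pushes its return address on an
# explicit stack; each return pops the most recent one. With a LIFO, the
# innermost call (g) is always the first to be returned from.
call_stack = []
returned_to = []

def call(return_address, body):
    call_stack.append(return_address)      # push before entering the callee
    body()                                 # execute the callee
    returned_to.append(call_stack.pop())   # pop to know where to resume

def g_body():
    pass                                   # g does its work, calls nothing

def f_body():
    call("inside_f", g_body)               # f calls g before finishing

call("after_f_call", f_body)
print(returned_to)
```

The stack pops "inside_f" first (returning from g back into f) and "after_f_call" second (returning from f to the caller); a FIFO would hand them back in the opposite, wrong order.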
To go further
Queuing theory gives more results on queues (e.g., the expected time before an element is popped) and has many applications.