Depth vs Breadth First Search in practice

2020-02-05 @Computer Science
By Vitaly Parnas

Depth First Search (DFS) and Breadth First Search (BFS) are common tree search algorithms. The dichotomy between the two also lends itself to one of my all-time favorite theoretical computer science concepts applicable in real life.

I’ll first loosely cover the theoretical properties of the two search algorithms. Then I’ll proceed to synthesize them with real-life applications.

The theoretical groundwork

Each tree node can branch into any number of children. For simplicity, I demonstrate a binary tree:

         x
        / \
       /   \
      x     \
     / \     x
    x   x   / \
   /       x   x      
  x           / \    
 / \         x   x   
x   x         

Imagine the tree as our search space. In one of the tree nodes lies our goal. Let’s assume this location is random and that we have no clue to its whereabouts. Let’s also assume that we have no heuristics to more intelligently guide us along the tree (specialized algorithms exist for that case).

If we conduct a DFS, starting at the root and proceeding from left to right, we don’t explore the right branch until we’ve exploited the left, in the recursive sense. For the example tree, the search sequence would resemble the following:

         1
        / \
       /   \
      2     \
     / \     8
    3   7   / \
   /       9   10
  4            / \
 / \         11  12
5   6         
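For the programmatically inclined, here is a minimal sketch of this traversal in Python. The (label, children) tuple representation and the function names are my own illustration, not drawn from any particular library:

    # Each node is a (label, children) pair, labeled here in DFS visit order.
    tree = (1, [(2, [(3, [(4, [(5, []), (6, [])])]),
                     (7, [])]),
                (8, [(9, []),
                     (10, [(11, []), (12, [])])])])

    def dfs(node, visit):
        """Visit nodes depth-first, left to right, via recursion."""
        label, children = node
        visit(label)
        for child in children:
            dfs(child, visit)

    dfs(tree, print)  # prints 1 through 12, in the order drawn above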

BFS, on the other hand, explores all the child nodes before proceeding to the grandchildren. In our example, the BFS search sequence would be the following:

         1
        / \
       /   \
      2     \
     / \     3
    4   5   / \
   /       6   7     
  8           / \  
 / \         9  10 
11  12        
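BFS swaps DFS’s implicit recursion stack for an explicit first-in-first-out queue. A minimal sketch under the same assumed (label, children) representation, this time with the nodes labeled in BFS visit order:

    from collections import deque

    tree = (1, [(2, [(4, [(8, [(11, []), (12, [])])]),
                     (5, [])]),
                (3, [(6, []),
                     (7, [(9, []), (10, [])])])])

    def bfs(root, visit):
        """Visit nodes level by level using a FIFO queue."""
        queue = deque([root])
        while queue:
            label, children = queue.popleft()
            visit(label)
            queue.extend(children)

    bfs(tree, print)  # prints 1 through 12, level by level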

What can we infer from the two searches?

  1. Both effectively demand an equal number of steps to explore the entire tree.
  2. DFS need only store the current branch in memory.
  3. BFS stores close to the entire tree in memory (for a complete exploration).

Considering a uniformly random probability of any node containing the goal, both search algorithms yield the same time complexity.

However, the space complexity of DFS is significantly more favorable.

DFS explores one branch at a time, to its full depth, and as it backtracks, it can disregard everything traversed outside the current branch. That constitutes a space complexity linear with respect to the tree depth.

BFS explores all the branches simultaneously, which demands the storage of all hitherto traversed nodes and branches in memory, at least while they still have children. The worst-case space complexity is thus exponential with respect to the depth.
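To make the gap concrete, here is a small experiment of my own devising that measures the peak storage of each strategy on a complete binary tree: the deepest stack for DFS versus the longest queue for BFS:

    from collections import deque

    def dfs_peak(depth):
        """Peak DFS stack size on a complete binary tree: linear in depth."""
        peak = 0
        def walk(level):
            nonlocal peak
            peak = max(peak, level + 1)  # frames currently on the stack
            if level < depth:
                walk(level + 1)
                walk(level + 1)
        walk(0)
        return peak

    def bfs_peak(depth):
        """Peak BFS queue length on a complete binary tree: exponential in depth."""
        queue, peak = deque([0]), 1
        while queue:
            level = queue.popleft()
            if level < depth:
                queue.extend([level + 1, level + 1])
            peak = max(peak, len(queue))
        return peak

    for d in (5, 10, 15, 20):
        print(d, dfs_peak(d), bfs_peak(d))  # DFS grows as d + 1; BFS as 2**d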

Lacking any further intelligence on our search tree, for the sake of economical (linear) space complexity, it makes far more sense to employ DFS, right?

No doubt. The search tree can comprise hundreds, thousands, or even millions of nodes. Each individual node, such as a web page, itself occupies space. Storing the full tree in memory would place astronomical memory demands on whatever machine executes this search.

But in practice there are caveats. Not with the example we’ve demonstrated under the given constraints, but with the types of trees we encounter in real life.

Many search trees span a depth that is, if not infinite, then horrendously intractable. Imagine the world wide web.

First, the WWW is not a tree, since it contains cycles all over (links that point back to far earlier traversed pages). But for the sake of search, we project it onto a tree.

That is, whether it’s a search algorithm or a human conducting a search, we don’t blindly and infinitely retrace the same footsteps. We detect and avoid repeating a path.

For practical purposes, we could also consider the problem of searching the WWW as one of unknown depth. What does this imply for DFS?

Unless we restrict the depth of our search, we may never find the solution if we indefinitely descend an erroneous branch. In this case DFS is considered incomplete - it may never yield the search result.

BFS, on the other hand, is guaranteed to find a solution in even a tree of infinite depth, provided the solution lies at some finite level. After all, we systematically explore each level’s expanse.

Alternative, and quite impactful, tree search algorithms exist that mix DFS and BFS in a way that consumes a tractable search space.

One is the Limited Depth Search (LDS) I’ve alluded to, which we can consider complete as long as the solution lies within the scope of our maximum search depth.

Another is Iterative Deepening Search (IDS), which conducts incremental LDS passes, incrementing the depth limit on each iteration. This may seem grossly inefficient in time complexity, but asymptotically the complexity is comparable to the classic DFS or BFS (the deepest exploration always dominates the overall runtime).
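Here is a minimal sketch of both ideas, once more over my own toy (label, children) representation:

    def dls(node, goal, limit):
        """Depth-limited DFS: finds the goal only if it lies within `limit`."""
        label, children = node
        if label == goal:
            return True
        if limit == 0:
            return False
        return any(dls(child, goal, limit - 1) for child in children)

    def ids(root, goal, max_depth=50):
        """Iterative deepening: rerun the depth-limited search, raising the limit."""
        for limit in range(max_depth + 1):
            if dls(root, goal, limit):
                return limit  # the depth at which the goal was found
        return None  # incomplete beyond max_depth, like any LDS

    tree = (1, [(2, [(4, []), (5, [])]),
                (3, [(6, []), (7, [])])])
    print(ids(tree, 6))  # -> 2: found on the third pass, at depth 2

The shallow levels are re-explored on every pass, but since each deeper level roughly doubles the work, the final pass dominates the total asymptotically.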

Let’s keep these hybrid algorithms in the back of our minds, but ultimately focus on the main dichotomy between depth and breadth-oriented searches.

In practice

Many complex real-life problems project onto a tree search or tree exploration, one of either unknown or fairly intractable depth. To each we apply some variation of a tree search algorithm. I’ll briefly cover a handful of cases.

Internet research

As we conduct internet research, we tend to broaden, opening up additional browser tabs as we proceed.

This description suggests a breadth-oriented search; not one of truly exponential space complexity, since we tend to limit the breadth to a certain factor (the number of tabs), varying strategies between descending a few levels, broadening, descending once more, and so on.

Asymptotically, this practical browsing mechanism might even yield the same space complexity. Disregard this point if it doesn’t make sense.

However, we tend to notice a familiar effect. The vast number of open tabs becomes anxiety provoking. The browser tends to slow down and consume the majority of available system RAM, even on modern machines.

These artifacts of increased space complexity begin to look ugly and overwhelming.

On the other hand, a DFS kind of browsing experience calls for no extra tabs. We follow the rabbit hole, one link after another, never broadening, until we reach a point of satisfaction at having exploited the given branch. Then we backtrack and exploit another.

In practice, at a certain depth we reach a point where we don’t wish to abandon the branch, yet don’t feel fully confident that continuing will lead us to significant marginal benefit.

Or sometimes it’s a matter of endurance. Starting something afresh feels easier than continuing down a strained path.

So we postpone the branch, keeping the tab open, and broaden.

Single-tabbed DFS browsing may require greater discipline. But the reduced space complexity burden on the cognitive faculty is noteworthy.

Travel, hikes

A typical traveler tends to explore a destination city or country in a Limited Depth Search sort of manner, intent on ‘having seen’ the maximum number of relevant sites, but each explored fairly shallowly.

With breadth-oriented travel, the space complexity impact typically involves not only increased mental bookkeeping (and in my case, fair anxiety), but increased labor.

The more ambitiously the traveler broadens, the more logistical planning, research, and transport arrangements between sites take place. Any transition in the exploration tree that doesn’t follow a direct link, but jumps from a certain depth of one branch to the root of another, results in the stated space complexity taxation.

I personally prefer depth-oriented travel precisely as a means to alleviate much of these space complexity artifacts.

That means one country and one city for a longer duration; minimal planning; no anxiety over maximizing the ‘having seen’ metric. Pick one branch and explore it as lengthily, as profoundly as I care to, following direct links (not jumping branches) deeper into the branch, traversing slowly, mindfully.

I tend to derive more authentic pleasure from such exploration, in contrast to the breadth-oriented kind. The simplified depth-oriented itinerary also results in vastly cheaper travel.

Multitasking

The multitasking may involve any set of activities: learning, skill acquisition, hobbies, goals, etc.

The more we emphasize depth, minimizing the number of parallel branches, the lower the context-switch cost, the mental bookkeeping, the anxiety, the scheduling complexity, and the expenses in general. I tend to refer to these collective costs simply as space complexity.

Alternatively, if we broaden, the scattered pursuit tends to manifest these cognitive artifacts.

Now I’m not necessarily critical of parallel pursuit. However, I do attempt to minimize specifically those activities for which I hold goals.

Goal-oriented activities particularly exhaust our head space, for lack of cognitive closure as well as for the meta-analytics we perform concerning our performance and progress. These types of activities carry an especially steep space complexity penalty.

Activities for which I hold no goal in mind, however, tend to drain fewer resources. I recognize that they carry benefit, but I don’t explicitly aim to assert the benefit or reach any given mark.

I enjoy the mere process, and can generally disperse the activities throughout the day without major concern for exhaustion. For me this includes reading, Jazz listening, meditation, exercise, hikes.

Gaming applications

Whether an AI or a human is engaged in a two (or multi) player game, each move involves a tree search of possibilities of varying depth.

To structure the next move in chess, for example, we may consider a few viable candidates, for each of which we then consider the potential opponent responses, and so on, recursively. This recursion forms the foundation of game tree search.

For all but the simplest of games (i.e. chess, not tic-tac-toe), such a decision tree spans an inconceivably exponential size. Consequently, both humans and AI employ certain heuristics and shortcuts, discarding unlikely branches and otherwise giving way to a more intelligent search, to maximize the probability of a winning strategy.

To continue to exhaust one branch beyond reason is more likely to compromise our overall strategy, especially given the high stochasticity (unpredictability) of the opponent’s decisions.

A typical strategy normally involves a Limited Depth Search with certain heuristics to emphasize the more likely branches.
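As a rough illustration, here is a toy depth-limited minimax sketch; the move generator and evaluation function are hypothetical placeholders of my own, not drawn from any real engine:

    def minimax(state, depth, maximizing, moves, evaluate):
        """Depth-limited minimax: search `depth` plies ahead, then fall back
        on a heuristic evaluation of the position."""
        options = moves(state)
        if depth == 0 or not options:
            return evaluate(state)
        scores = (minimax(child, depth - 1, not maximizing, moves, evaluate)
                  for child in options)
        return max(scores) if maximizing else min(scores)

    # Hypothetical toy game: a state is a number; a move increments or
    # doubles it; the evaluation simply favors larger numbers.
    moves = lambda s: [s + 1, s * 2] if s < 50 else []
    evaluate = lambda s: s
    print(minimax(3, 4, True, moves, evaluate))  # best value 4 plies ahead

In a real game, the move generator would also prune the unlikely branches: the heuristics mentioned above.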

Now for many players, that depth factor is 1 or 2: consider a few possible moves, perhaps reflect on how the opponent may react, and otherwise appeal to limited intuition. This kind of player is unlikely to arrive at victory against another who employs greater depth in strategy exploration, the abilities of the two otherwise comparable.

While gaming AI and human thinking are greatly more complex than any search tree can capture, at a simpler, intuitive level it normally makes sense to conduct some Limited Depth (or perhaps Iterative Deepening) Search, with heuristics, to arrive at a reasonably palatable decision.

Parallel program execution

The instructions of a typical serial computer program execute sequentially on a CPU. Any recursive function call (a function that calls itself) causes the program to grow its stack space (the region of memory that stores the execution frames).

A function with two (or more) recursive calls may seem like it would occupy an exponential amount of stack space, the number of frames doubling at each tree level.

Fortunately, sequential execution allocates stack space more economically: depth-first, not breadth-first. It first follows the whole left-most branch to its full depth (if we imagine DFS execution left to right), then proceeds to the right recursive branches.
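A quick way to observe this: the doubly recursive Fibonacci below (my own instrumentation) makes an exponential number of calls, yet the stack never holds more than a linear number of frames at once:

    calls, live, peak = 0, 0, 0

    def fib(n):
        """Doubly recursive Fibonacci, instrumented to track stack usage."""
        global calls, live, peak
        calls += 1
        live += 1  # a frame is pushed
        peak = max(peak, live)
        result = n if n < 2 else fib(n - 1) + fib(n - 2)
        live -= 1  # the frame is popped
        return result

    fib(20)
    print(calls, peak)  # roughly 22,000 calls, but only 20 frames deep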

Now matters grow somewhat more complicated for parallel program execution. Irrespective of the parallel architecture, any parallel execution exchanges space for time.

A theoretical parallel machine with an unbounded number of execution nodes, reducing the runtime to (at best) a logarithmic number of sequentially executing time steps, would indeed occupy an exponential amount of space (memory), simultaneously processing all branches.

Don’t worry if this didn’t make sense.

And as a further consequence of (non-theoretical) parallel execution, there’s much additional bookkeeping and latency involved for the parallel nodes to communicate and exchange key bits of the solution.

As a result, parallel programs never proportionally gain in time what they sacrifice in space. This is yet another form of BFS taxation!

Likewise, don’t worry if this didn’t make full sense either.

Merely keep in mind that serial program execution implies DFS, while parallel implies BFS. Space exchanged for time (at a usually steep commission).

Conclusion

The main point to gather from this essay is not only the impact of Breadth First Search on algorithmic space complexity, but its impact on our mental faculties in any set of activities we undertake broadly.

Emphasis on depth tends to simplify matters for many a domain. Focus on breadth tends to penalize our energy supply.

Questions, comments? Connect.