Space is reusable and time isn’t. This powerful invariant often makes space complexity far leaner than time complexity, something we encounter in space complexity theory as well as in practice.
A corollary of Savitch’s theorem is that NP ⊆ PSPACE: any problem a nondeterministic Turing Machine can solve time-efficiently requires only polynomial space on a deterministic computational model (similar polynomial-space simulations hold for quantum and parallel models). In practice, the space requirement can be as modest as linear even when every known solution takes exponential time. The unrestricted boolean formula satisfiability problem, for example, has no known efficient solution, so far requiring exponential time to explore all truth assignments. And yet we need only store one truth assignment in memory at a time, keeping space utilization efficient.
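To make that concrete, here is a minimal Python sketch of the exhaustive search, assuming a CNF encoding where each clause is a list of signed variable indices (the name brute_force_sat and the encoding are illustrative, not taken from any particular solver). It enumerates truth assignments one at a time, reusing the same storage, so memory stays linear in the number of variables even though the running time is exponential.

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Exhaustively search for a satisfying assignment of a CNF formula.

    Each clause is a list of integers: literal k means variable k is true,
    -k means variable k is false (variables are 1-indexed).
    Time is O(2^n * |formula|), but space stays linear: only the current
    assignment is held in memory, and that storage is reused on every step.
    """
    for assignment in product((False, True), repeat=num_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment  # satisfying assignment found
    return None  # unsatisfiable

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
print(brute_force_sat(3, [[1, 2], [-1, 3], [-2, -3]]))
```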
Generally, a depth-first exploration requires space proportional to the (presumably known) depth of the deepest branch times the branching factor, enough to hold the siblings of every node along the current path; the space used to explore the other branches is entirely reusable. For a balanced tree, with relatively consistent branch depths and branching factors, the depth is logarithmic in the number of nodes, irrespective of the branching factor (which only changes the base of the logarithm).
In recursive algorithm design, only one chain of recursive calls occupies memory at a time; each new call reuses the stack space released by the previous one, so space consumption is linear in the number of recursion levels (the tree depth) rather than in the total number of calls.
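The following sketch illustrates the point on an illustrative complete binary tree with heap-style labels: a depth-first traversal of tens of thousands of nodes never holds more than a handful of stack frames, one per level of depth.

```python
def dfs(children, node, visit, depth=0):
    """Depth-first traversal: the call stack holds one frame per level,
    so peak memory is proportional to the tree depth, not the node count."""
    visit(node, depth)
    for child in children(node):
        dfs(children, child, visit, depth + 1)

# A complete binary tree with 2^15 - 1 nodes, labelled 1..n heap-style.
n = 2**15 - 1
children = lambda v: [2 * v, 2 * v + 1] if 2 * v <= n else []

peak_depth = 0
def visit(node, depth):
    global peak_depth
    peak_depth = max(peak_depth, depth)

dfs(children, 1, visit)
print(f"{n} nodes visited, recursion never deeper than {peak_depth} levels")
```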
The above regretfully doesn’t hold for a breadth-first exploration, which in the worst case must store an entire level of the search tree, a constant fraction of all its nodes, in memory. The observation doesn’t compromise the NP ⊆ PSPACE corollary of Savitch’s theorem, as no nondeterministic TM can ‘shortcut’ the breadth-first search. In fact, any problem requiring a breadth-first solution over an unrestricted number of tree levels belongs to the EXPTIME class, and PSPACE ⊆ EXPTIME.
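For contrast, here is a breadth-first traversal of the same illustrative tree used above: the queue must hold an entire frontier at once, and its peak size is roughly half the node count rather than the logarithmic depth.

```python
from collections import deque

def bfs_peak_queue(children, root):
    """Breadth-first traversal, tracking the largest queue (frontier) size.
    Worst-case memory grows with the width of the tree, not its depth."""
    queue, peak = deque([root]), 1
    while queue:
        node = queue.popleft()
        queue.extend(children(node))
        peak = max(peak, len(queue))
    return peak

n = 2**15 - 1
children = lambda v: [2 * v, 2 * v + 1] if 2 * v <= n else []
print("peak queue size:", bfs_peak_queue(children, 1))  # ~n/2 nodes, versus depth ~14 for DFS
```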
As we browse the internet, we need not store the entirety of it in local memory. We need not even store the full content of every page we visit, only what the browser deems worth caching for quicker access to frequently or most-recently-used data. Under tighter memory constraints, we really only need to store the content of one page at a time. At the extreme, we could effectively stream navigable content, storing only a fraction of a page locally and reusing that space to load more as we proceed.
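A minimal sketch of such a most-recently-used caching policy, assuming a fixed capacity and a stand-in fetch function in place of an actual download: the cache occupies a bounded, reusable amount of space no matter how many pages are browsed.

```python
from collections import OrderedDict

class LRUCache:
    """Bounded cache that evicts the least-recently-used entry, reusing a
    fixed amount of space regardless of how many pages are browsed."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()

    def get(self, url, fetch):
        if url in self.pages:
            self.pages.move_to_end(url)     # mark as most recently used
            return self.pages[url]          # cache hit: no download needed
        content = fetch(url)                # cache miss: pay the time cost
        self.pages[url] = content
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # evict the least recently used page
        return content

# Hypothetical usage: `fetch` stands in for a real network download.
cache = LRUCache(capacity=2)
fetch = lambda url: f"<html>content of {url}</html>"
for url in ["a.html", "b.html", "a.html", "c.html", "b.html"]:
    cache.get(url, fetch)
print(list(cache.pages))  # only the two most recently used pages remain
```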
The immediate implication of the above is the trade-off between space and time. The more we store locally, the less often we must download content again, and the better the time complexity of not only exploring more data but also synthesizing content we’ve already processed. We encounter this trade-off frequently in theory and in practice, across hardware and software components. For example, a serial input/output unit (e.g., a serial adder, a serial shift register, or serial transfer of any sort) economizes circuitry at the expense of time: it processes one bit of information at a time instead of processing multiple bits in parallel.
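A small sketch of the bit-serial idea, assuming operands supplied as equal-length, least-significant-bit-first lists: a single full-adder step is reused across n time steps instead of laying out n adders side by side.

```python
def serial_add(a_bits, b_bits):
    """Add two equal-length numbers given as bit lists (least-significant
    bit first), reusing one one-bit full adder across n time steps instead
    of wiring up n adders in parallel."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        total = a + b + carry      # one full-adder evaluation per time step
        out.append(total & 1)      # sum bit
        carry = total >> 1         # carry into the next step
    out.append(carry)
    return out

# 13 (1011 LSB-first) + 11 (1101 LSB-first) = 24 (00011 LSB-first)
print(serial_add([1, 0, 1, 1], [1, 1, 0, 1]))
```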
Without exploring the space/time trade-off in greater depth, I’ll note that not all problems lend themselves to space/time decomposition beyond certain parameters. Task decomposition and dependency analysis is a separate unit of study, not only in parallel computation but in general workflows and assembly lines.
Embedded processors tend to simplify the processor datapath under tight space and cost constraints by minimizing the number of pipeline stages, effectively reducing instruction-level decomposition and parallelization. The reduced circuitry sacrifices either time or problem scope: the simplified datapath may still meet the timing expectations of the specific problems it was designed for, but in a more general-purpose setting, the space/cost efficiency carries an inverse relationship to time. Hobbyist embedded processors and kits come to mind (Arduino, Raspberry Pi).
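A back-of-envelope model (idealized, ignoring stalls and hazards; the numbers are purely illustrative) of what those pipeline stages buy: with k overlapping stages, N instructions finish in roughly k + N − 1 stage-delays, while an unpipelined datapath pays the full k-stage latency for every instruction.

```python
def total_time(num_instructions, stages, stage_delay_ns=1.0):
    """Idealized timing, no stalls or hazards: a k-stage pipeline issues one
    instruction per stage delay once full; an unpipelined datapath pays the
    full k-stage latency for each instruction in turn."""
    pipelined = (stages + num_instructions - 1) * stage_delay_ns
    unpipelined = num_instructions * stages * stage_delay_ns
    return pipelined, unpipelined

n = 1000
for k in (2, 5):
    p, u = total_time(n, k)
    print(f"{k} stages: {p:.0f} ns pipelined vs {u:.0f} ns unpipelined "
          f"(~{u / p:.1f}x slower without pipelining)")
```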
Any algorithmic process lends itself to space and time complexity analysis, and perhaps to some space/time decomposition and trade-off. This lets us vary resources and problem-solving strategies depending on problem-domain-specific parameters. Understanding the problem domain and being able to decompose it into independent parts is fundamental to effective problem resolution.
Questions, comments? Connect.