Pseudo-polynomial computational complexity

2019-02-27 @Computer Science

What does it mean for an algorithm to execute in pseudo-polynomial time?

Background

If we consider asymptotic algorithm runtime, the NP complexity class is popularly associated with problems that have no known polynomial (efficient) solution. For such problems, the input (encoding) length n appears somewhere in the exponent of the best known runtime, e.g., O(2^n). Consequently, the problem complexity grows exponentially with n. Even on ultra-modern hardware, such an exponentially growing runtime explodes beyond a moderately sized input.

The formal definition of NP, which abbreviates Non-deterministic polynomial, is either that of a problem

  1. solvable in polynomial time on a Non-deterministic Turing Machine - a purely theoretical TM model that can execute multiple computational branches simultaneously, or
  2. verifiable in polynomial time given a potential solution.

Both of the above are equivalent. Note that NP contains all of P, so membership in NP alone does not make a problem hard; rather, the hardest problems in NP are those with no known deterministic polynomial solution.

See my introductory computational complexity post if any of this sounds alien.

NP Complete (NPC) problems are interesting. For one, they are the hardest problems in NP. Further, finding an efficient (polynomial-runtime) solution to any one of them would immediately imply P=NP, i.e., that every other NPC, and consequently every NP, problem has a polynomial solution.

For a more formal background on NPC problems and their relation to the NP class, see my exploratory publication.

Pseudo-polynomial complexity

As I indicated, demonstrating an efficient algorithm for an NPC problem would imply P=NP. Occasionally I have landed on an algorithm that seemed to match this profile, only to realize the solution was not polynomial but pseudo-polynomial. The confusion stemmed from my misunderstanding of what counts as a reasonable problem-encoding length.

While a reasonable encoding can vary between the Turing Machine variants used to demonstrate a solution, it generally corresponds in conciseness to the encoding a physical computer would use. Moreover, the computation runtimes under any two reasonable encodings differ by at most a polynomial factor [Garey, Johnson], irrespective of the TM variant and alphabet size [Sipser].

The encoding must not pad the input with unnecessary bits, but may otherwise employ any base greater than 1 (binary, octal, hexadecimal). A physical computer ultimately employs base 2 (bits) to store data, but this is of no relevance in a theoretical TM model, since any two base representations require only polynomial time to convert between, as do any two TM variants.

It follows that any number x uses only O(log x) bits for the encoding. (The logarithmic base is asymptotically irrelevant.)
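As a quick illustration using Python's built-in base conversions (a sketch of my own), the same number's encoding length in different bases differs only by a constant factor, and each is logarithmic in the magnitude:

x = 10 ** 12                # magnitude: one trillion
print(len(bin(x)) - 2)      # 40 binary digits, about log2(x)
print(len(oct(x)) - 2)      # 14 octal digits, about log8(x)
print(len(hex(x)) - 2)      # 10 hex digits, about log16(x)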

Proceeding to the point, a pseudo-polynomial runtime is a function of not only the encoding length of input x, but the magnitude of x itself. And since the encoding length of x (which we’ll call n) comprises only n = O(log x) bits of input, the respective portion of the solution runtime becomes O(x) = O(2^(log x)) = O(2^n), exponential!
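To make the blow-up concrete, here is a toy sketch (my own illustration): a loop whose runtime is proportional to the magnitude x performs work exponential in the number of input bits.

def count_to(x):
    # O(x) iterations: linear in the magnitude x, but exponential
    # in the encoding length n = x.bit_length()
    steps = 0
    for _ in range(x):
        steps += 1
    return steps

print((2 ** 20).bit_length())   # 21 bits of input...
print(count_to(2 ** 20))        # ...yet 1048576 iterations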

Why does this runtime carry a special name, pseudo-polynomial, rather than simply exponential?

First, not all exponential runtimes are, strictly speaking, equal: exponential growth in one parameter can be preferable to exponential growth in another. While the theoretical runtime is exponential in terms of one variable, in practice the variable might carry an upper bound, in the case of an empirically measured quantity for example, resulting in a polynomial runtime on these restricted problem instances.

The name pseudo-polynomial thus gives us valuable information: the solution exhibits an exponential runtime only as long as the input magnitude is allowed to increase exponentially with the encoding length. If the magnitude is restricted, we can count on a polynomial runtime.

Let me provide an example: a pseudo-polynomial solution to the NPC problem PARTITION.

PARTITION

Given a finite set A = {a_0, a_1, …, a_(n-1)} and a positive integer size s(a) for each element a, can we partition A into two subsets of equal total size? That is to say, is there a subset A' of A such that s(A') = s(A - A'), where s applied to a set denotes the sum of its elements' sizes? For example, A = {1, 2, 3, 4, 6} with s the identity splits into {2, 6} and {1, 3, 4}, each of total size 8.

PARTITION has a dynamic programming solution. Dynamic programming simulates a bottom-up recursive strategy: the calculations at the lower stack levels write their results to a table, and the higher-level calculations then reference the tabulated results without re-executing the recursive calls.

What would a recursive solution to PARTITION look like?

Let B equal the combined size of A, and let p(i,j) denote a function that returns true (T) when there exists a subset of {a_0, a_1, …, a_i} with a sum of sizes equal to j, for 0 ≤ i ≤ n-1, 0 ≤ j ≤ B/2.

At the top, we know that p(n-1, B/2) yields true exactly when some subset of A has a combined size of precisely half the total, implying a solution to the problem instance. But how shall we recursively define p(i,j)?

We can be sure that p(i-1,j) = T implies p(i,j) = T, since a subset of an even smaller prefix already sums to j.

Otherwise, p(i,j) is also true if s(a_i) ≤ j and p(i-1, j-s(a_i)) = T. That is, the current prefix contains elements adding to j if a smaller prefix contains elements adding to the difference between j and the current element's size. Combining this into one statement:

p(i,j) = p(i-1,j) or [s(a_i) ≤ j and p(i-1, j-s(a_i))]

This being a recursion, we also need a base case:

p(0,j) = T if and only if j = 0 or j = s(a_0).

We can also immediately reject an instance with a total size not divisible by two.
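Before building the table, it may help to see the recurrence transcribed directly as a top-down recursion; a minimal sketch of my own, where the functools.lru_cache memoization stands in for the table we are about to build:

from functools import lru_cache

def partition_recursive(sizes, half):
    # p(i, j): does some subset of sizes[0..i] sum to j?
    @lru_cache(maxsize=None)
    def p(i, j):
        if i == 0:
            return j == 0 or j == sizes[0]   # base case
        # either j is reachable without a_i, or a_i completes a smaller sum
        return p(i - 1, j) or (sizes[i] <= j and p(i - 1, j - sizes[i]))
    return p(len(sizes) - 1, half)

# e.g. partition_recursive([1, 2, 3, 4, 6], 8) -> True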

This being a dynamic programming strategy, we proceed not top-down, as in a typical recursion, but bottom-up, populating our table row by row, column by column. Our algorithm, written in Python, resembles the following:

def partition(A, s):
    # A: non-empty list of elements; s: positive integer size function
    B = sum(s(a) for a in A)
    # Reject an instance whose total size is not divisible by two
    if B % 2 != 0:
        return False
    n, half = len(A), B // 2
    # p[i][j]: does some subset of {a_0, ..., a_i} have total size j?
    p = [[False] * (half + 1) for _ in range(n)]
    # Populate the initial row (base case)
    for j in range(half + 1):
        p[0][j] = (j == 0) or (j == s(A[0]))
    # Populate the remaining rows
    for i in range(1, n):
        for j in range(half + 1):
            p[i][j] = p[i-1][j] or (s(A[i]) <= j and p[i-1][j - s(A[i])])
    return p[n-1][half]
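Trying the algorithm on a couple of toy instances of my own:

print(partition([1, 2, 3, 4, 6], lambda a: a))   # True: {2, 6} vs {1, 3, 4}
print(partition([1, 2, 3, 4, 5], lambda a: a))   # False: the total size 15 is odd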

What’s the asymptotic algorithm runtime? Focusing on the most intensive double loop, and assuming O(1) memory reads/writes and arithmetic operations: O(nB). Polynomial? Deceptively, no. B is the magnitude of the largest number in the problem instance, in this case the total size of set A, and it occupies only O(log B) bits of the encoding. Since the encoding length is logarithmic in B, the runtime is exponential in the encoding length.
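The explosion is easy to witness: each instance below lengthens the encoding only linearly in k, yet the table the algorithm above must fill roughly doubles with every extra bit (my own toy measurement of table sizes, not runtimes):

for k in (10, 20, 30):
    A = [2 ** k, 2 ** k]            # two k-bit elements: encoding grows linearly in k
    B = sum(A)
    cells = len(A) * (B // 2 + 1)   # the n x (B/2 + 1) table of the algorithm above
    print(k, cells)                 # 2050, then 2097154, then 2147483650 cells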

However, given a suitable restriction on the magnitude, when the set elements measure an empirical quantity such as weight, or range over a small set of categories, for example, the effective runtime becomes polynomial.

Other problems featuring a similar dynamic-programming pseudo-polynomial solution include KNAPSACK and SUBSET-SUM.
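SUBSET-SUM in particular amounts to nearly the same computation, asking whether any subset reaches a given target sum; a minimal sketch of my own, keeping the table as a set of reachable sums:

def subset_sum(nums, target):
    reachable = {0}                 # sums attainable so far (the empty subset gives 0)
    for x in nums:
        reachable |= {j + x for j in reachable if j + x <= target}
    return target in reachable

PARTITION is then precisely SUBSET-SUM over the element sizes with target B/2, and the O(n·target) runtime remains pseudo-polynomial.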

Number problem

PARTITION is known as a number problem: one with no polynomial relation between the input length and the input magnitude. In other words, the input magnitude has no theoretical limit polynomially related to the input length.

The implication is that an NPC problem that’s not a number problem can’t be solved by a pseudo-polynomial algorithm unless P=NP. (With a polynomial relation between input length and magnitude, a pseudo-polynomial algorithm effectively becomes polynomial, which can’t exist for an NPC problem unless P=NP. CLIQUE, whose instances contain no numbers exceeding the vertex count, is one such non-number problem.)

References

  1. Garey, Michael R. and Johnson, David S. Computers and Intractability; A Guide to the Theory of NP-Completeness. 1979.
  2. Sipser, Michael. Introduction to the Theory of Computation. 2013, 3rd edition.
  3. Computational Complexity: Short Guide
  4. On computational complexity

Questions, comments? Connect.