In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. The first systematic work on parameterized complexity was done by Downey & Fellows (1999).
Under the assumption that P ≠ NP, there exist many natural problems that require superpolynomial running time when complexity is measured in terms of the input size only, but that are computable in a time that is polynomial in the input size and exponential or worse in a parameter k. Hence, if k is fixed at a small value and the growth of the function over k is relatively small then such problems can still be considered "tractable" despite their traditional classification as "intractable".
The existence of efficient, exact, and deterministic solving algorithms for NP-complete, or otherwise NP-hard, problems is considered unlikely, if input parameters are not fixed; all known solving algorithms for these problems require time that is exponential (or at least superpolynomial) in the total size of the input. However, some problems can be solved by algorithms that are exponential only in the size of a fixed parameter while polynomial in the size of the input. Such an algorithm is called a fixed-parameter tractable (fpt-)algorithm, because the problem can be solved efficiently for small values of the fixed parameter.
Problems in which some parameter k is fixed are called parameterized problems. A parameterized problem that allows for such an fpt-algorithm is said to be a fixed-parameter tractable problem and belongs to the class FPT, and the early name of the theory of parameterized complexity was fixed-parameter tractability.
Many problems have the following form: given an object x and a nonnegative integer k, does x have some property that depends on k? For instance, for the vertex cover problem, the parameter can be the number of vertices in the cover. In many applications, for example when modelling error correction, one can assume the parameter to be "small" compared to the total input size. Then it is challenging to find an algorithm which is exponential only in k, and not in the input size.
In this way, parameterized complexity can be seen as two-dimensional complexity theory. This concept is formalized as follows: a parameterized problem is a language $L \subseteq \Sigma^{*} \times \mathbb{N}$, where $\Sigma$ is a finite alphabet; the second component is called the parameter of the problem. A parameterized problem $L$ is fixed-parameter tractable if the question "$(x, k) \in L$?" can be decided in running time $f(k) \cdot |x|^{O(1)}$, where $f$ is a computable function depending only on $k$. The corresponding complexity class is called FPT.
For example, there is an algorithm which solves the vertex cover problem in time $O(kn + 1.274^k)$,[1] where $n$ is the number of vertices and $k$ is the size of the vertex cover. This means that vertex cover is fixed-parameter tractable with the size of the solution as the parameter.
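The cited algorithm is fairly involved; purely as an illustration of how such running times arise, here is a minimal Python sketch (names and representation chosen for this example, not taken from the cited work) of the classic bounded search tree for vertex cover, which runs in $O(2^k m)$ time on a graph with $m$ edges: every edge must have an endpoint in the cover, so the algorithm branches on the two possibilities.

# Illustrative sketch, not the cited 1.274^k algorithm: the classic
# O(2^k * m) bounded search tree for vertex cover. Some endpoint of any
# edge must be in every vertex cover, so branch on both choices.
def has_vertex_cover(edges, k):
    """Return True if the graph given by the edge set has a vertex cover of size <= k."""
    if not edges:
        return True          # no edges left: the empty set covers everything
    if k == 0:
        return False         # edges remain but the budget is exhausted
    u, v = next(iter(edges))
    # Branch 1: put u in the cover and delete all edges incident to u.
    rest_u = {(a, b) for (a, b) in edges if a != u and b != u}
    # Branch 2: put v in the cover and delete all edges incident to v.
    rest_v = {(a, b) for (a, b) in edges if a != v and b != v}
    return has_vertex_cover(rest_u, k - 1) or has_vertex_cover(rest_v, k - 1)

# Example: a path on 4 vertices has a vertex cover of size 2 but not of size 1.
print(has_vertex_cover({(1, 2), (2, 3), (3, 4)}, 2))  # True
print(has_vertex_cover({(1, 2), (2, 3), (3, 4)}, 1))  # False

The search tree has depth at most $k$ and branching factor 2, which is where the $2^k$ factor comes from.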
FPT contains the fixed parameter tractable problems, which are those that can be solved in time $f(k) \cdot |x|^{O(1)}$ for some computable function $f$. Typically, this function is thought of as single exponential, such as $2^{O(k)}$, but the definition admits functions that grow even faster. This is essential for a large part of the early history of this class. The crucial part of the definition is to exclude functions of the form $f(n, k)$, such as $n^k$. The class FPL (fixed parameter linear) is the class of problems solvable in time $f(k) \cdot |x|$ for some computable function $f$.[2] FPL is thus a subclass of FPT.
An example is the satisfiability problem, parameterised by the number of variables. A given formula of size $m$ with $k$ variables can be checked by brute force in time $O(2^k m)$. A vertex cover of size $k$ in a graph of order $n$ can be found in time $O(2^k n)$, so this problem is also in FPT.
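For concreteness, here is a minimal sketch of the brute-force check just described, assuming (as a choice made for this example) that the formula is given in CNF as clauses of integer literals: all $2^k$ assignments are enumerated and the formula of size $m$ is evaluated on each, giving the $O(2^k m)$ bound.

from itertools import product

# Brute-force satisfiability for a CNF formula over k variables.
# Literal v means "variable v is true"; literal -v means "variable v is false".
def brute_force_sat(clauses, k):
    for assignment in product([False, True], repeat=k):
        def lit_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        # The formula is satisfied if every clause contains a true literal.
        if all(any(lit_true(lit) for lit in clause) for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2) and (not x2) is unsatisfiable.
print(brute_force_sat([[1, 2], [-1, 2], [-2]], 2))  # False
print(brute_force_sat([[1, 2], [-1, 2]], 2))        # True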
An example of a problem that is thought not to be in FPT is graph coloring parameterised by the number of colors. It is known that 3-coloring is NP-hard, and an algorithm for graph $k$-colouring in time $f(k) \cdot n^{O(1)}$ for $k = 3$ would run in polynomial time in the size of the input. Thus, if graph coloring parameterised by the number of colors were in FPT, then P = NP.
There are a number of alternative definitions of FPT. For example, the running-time requirement can be replaced by $f(k) + |x|^{O(1)}$. Also, a parameterised problem is in FPT if it has a so-called kernel. Kernelization is a preprocessing technique that reduces the original instance to its "hard kernel", a possibly much smaller instance that is equivalent to the original instance but has a size that is bounded by a function of the parameter.
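As a hedged illustration of kernelization, the sketch below applies the classic Buss reduction rule for vertex cover: a vertex of degree greater than $k$ must be in every cover of size at most $k$, and once no such vertex remains, a yes-instance can have at most $k^2$ edges. The remaining instance is therefore a kernel whose size depends only on $k$; the edge-set representation and function names are assumptions of this example.

# Buss kernelization for vertex cover (illustrative sketch).
# Rule: a vertex of degree > k must be in every cover of size <= k, so take it
# and reduce the budget. Isolated vertices never appear in the edge-set view.
def vertex_cover_kernel(edges, k):
    """Return (kernel_edges, remaining_budget), or None if this is a no-instance."""
    edges = set(edges)
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for (u, v) in edges:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            if d > k:
                # v is forced into the cover: delete its edges, spend one unit of budget.
                edges = {(a, b) for (a, b) in edges if a != v and b != v}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None              # more than k^2 edges of degree <= k cannot be covered
    return edges, k              # kernel with at most k^2 edges; solve it by any method

# A star with 9 leaves and k = 1: the centre is forced, leaving an empty kernel.
print(vertex_cover_kernel({(0, i) for i in range(1, 10)}, 1))  # (set(), 0)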
FPT is closed under a parameterised reduction called fpt-reduction, which runs in fpt-time and maps the parameter of the produced instance to a value bounded by a computable function of the original parameter.
Obviously, FPT contains all polynomial-time computable problems. Moreover, it contains all optimisation problems in NP that allow an efficient polynomial-time approximation scheme (EPTAS).
The W hierarchy is a collection of computational complexity classes. A parameterized problem is in the class W[$i$] if every instance $(x, k)$ can be transformed (in fpt-time) to a combinatorial circuit that has weft at most $i$, such that $(x, k)$ is a yes-instance if and only if there is a satisfying assignment to the inputs that assigns 1 to exactly $k$ inputs. The weft is the largest number of logical units with unbounded fan-in on any path from an input to the output. The total number of logical units on the paths (known as depth) must be limited by a constant that holds for all instances of the problem.
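To make the weft measure concrete, here is a small illustrative sketch that computes the weft of a circuit given as a DAG of gates. It counts a gate as "large" when its fan-in exceeds 2, a common convention but an assumption of this example rather than part of the formal definition above.

from functools import lru_cache

# gates maps a gate name to the list of its input wires; names that are not
# keys are circuit inputs. The weft is the maximum number of large-fan-in
# gates on any path from an input to the output wire.
def weft(gates, output, small_bound=2):
    @lru_cache(maxsize=None)
    def best(wire):
        if wire not in gates:                      # circuit input
            return 0
        here = 1 if len(gates[wire]) > small_bound else 0
        return here + max(best(w) for w in gates[wire])
    return best(output)

# A big AND of big ORs: the shape of weft-2 circuits underlying W[2].
circuit = {
    "or1": ["x1", "x2", "x3"],
    "or2": ["x2", "x3", "x4"],
    "out": ["or1", "or2", "x5"],
}
print(weft(circuit, "out"))  # 2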
Note that $\mathsf{FPT} = W[0]$ and $W[i] \subseteq W[j]$ for all $i \le j$. The classes in the W hierarchy are also closed under fpt-reduction.
Many natural computational problems occupy the lower levels, W[1] and W[2].
Examples of W[1]-complete problems include deciding whether a given graph contains a clique of size $k$, deciding whether it contains an independent set of size $k$, and deciding whether a given nondeterministic single-tape Turing machine accepts within $k$ steps (the "short Turing machine acceptance" problem).
Examples of W[2]-complete problems include deciding whether a given graph contains a dominating set of size $k$, and the set cover problem parameterised by the size $k$ of the solution.
$W[t]$ can also be defined using the family of Weighted Weft-$t$-Depth-$d$ SAT problems for $d \ge t$: $W[t, d]$ is the class of parameterized problems that fpt-reduce to this problem, and $W[t] = \bigcup_{d \ge t} W[t, d]$.
Here, Weighted Weft-$t$-Depth-$d$ SAT is the following problem:
Input: A Boolean formula of depth at most $d$ and weft at most $t$, and a number $k$. The depth is the maximal number of gates on any path from the root to a leaf, and the weft is the maximal number of gates of fan-in at least three on any path from the root to a leaf.
Question: Does the formula have a satisfying assignment of Hamming weight $k$?
It can be shown that for $t \ge 2$ the problem Weighted $t$-Normalize SAT is complete for $W[t]$ under fpt-reductions.[3] Here, Weighted $t$-Normalize SAT is the following problem:
Input: A Boolean formula of depth at most $t$ with an AND gate on top, and a number $k$.
Question: Does the formula have a satisfying assignment of Hamming weight $k$?
W[P] is the class of problems that can be decided by a nondeterministic $h(k) \cdot |x|^{O(1)}$-time Turing machine that makes at most $O(f(k) \cdot \log n)$ nondeterministic choices in the computation on $(x, k)$ (a $k$-restricted Turing machine) (Flum & Grohe 2006).
It is known that FPT is contained in W[P], and the inclusion is believed to be strict. However, resolving this issue would imply a solution to the P versus NP problem.
Other connections to unparameterised computational complexity are that FPT equals W[P] if and only if circuit satisfiability can be decided in time $\exp(o(n)) \cdot m^{O(1)}$, or if and only if there is a computable, nondecreasing, unbounded function $f$ such that all languages recognised by a nondeterministic polynomial-time Turing machine using $f(n) \log n$ nondeterministic choices are in P.
W[P] can be loosely thought of as the class of problems where we have a set of $n$ items and we want to find a subset of size $k$ such that a certain property holds. We can encode a choice as a list of $k$ integers, stored in binary. Since the highest any of these numbers can be is $n$, $\lceil \log_2 n \rceil$ bits are needed for each number. Therefore $O(k \log n)$ total bits are needed to encode a choice, and so we can select a subset with $O(k \log n)$ nondeterministic choices.
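A small sketch of this counting argument (the exact encoding format is just one possible choice): each of the $k$ chosen indices is written in $\lceil \log_2 n \rceil$ bits, so the whole choice occupies $O(k \log n)$ bits, matching the number of nondeterministic choices available to a $k$-restricted machine.

from math import ceil, log2

# Encode a choice of k item indices (each in the range 0..n-1) as a single
# bit string of k * ceil(log2 n) bits, and decode it again.
def encode_choice(indices, n):
    bits_per_index = ceil(log2(n))
    return "".join(format(i, "0{}b".format(bits_per_index)) for i in indices)

def decode_choice(bitstring, k, n):
    bits_per_index = ceil(log2(n))
    return [int(bitstring[j * bits_per_index:(j + 1) * bits_per_index], 2)
            for j in range(k)]

encoded = encode_choice([3, 7, 12], n=16)           # 3 indices * 4 bits = 12 bits
print(encoded, decode_choice(encoded, k=3, n=16))   # 001101111100 [3, 7, 12]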
XP is the class of parameterized problems that can be solved in time $n^{f(k)}$ for some computable function $f$. These problems are called slicewise polynomial, in the sense that each "slice" of fixed $k$ has a polynomial algorithm, although possibly with a different exponent for each $k$. Compare this with FPT, which merely allows a different constant prefactor for each value of $k$. XP contains FPT, and it is known that this containment is strict by diagonalization.
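As an illustration, the following sketch is a typical XP algorithm: it decides whether a graph has a clique of size $k$ by testing all $\binom{n}{k} = O(n^k)$ subsets, which is polynomial for every fixed $k$ but with an exponent that grows with $k$.

from itertools import combinations

# Slicewise polynomial (XP-style) brute force: check every subset of size k.
def has_clique(vertices, edges, k):
    edge_set = {frozenset(e) for e in edges}
    return any(all(frozenset((u, v)) in edge_set for u, v in combinations(subset, 2))
               for subset in combinations(vertices, k))

# A triangle with a pendant vertex: there is a 3-clique but no 4-clique.
V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(has_clique(V, E, 3), has_clique(V, E, 4))  # True False

Clique parameterized by the solution size is in XP by this algorithm, but it is W[1]-complete and therefore believed not to be in FPT.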
para-NP is the class of parameterized problems that can be solved by a nondeterministic algorithm in time $f(k) \cdot |x|^{O(1)}$ for some computable function $f$. It is known that $\mathsf{FPT} = \text{para-NP}$ if and only if $\mathsf{P} = \mathsf{NP}$.[4]
A problem is para-NP-hard if it is NP-hard already for a constant value of the parameter. That is, there is a "slice" of fixed $k$ that is NP-hard. A parameterized problem that is para-NP-hard cannot belong to the class XP, unless P = NP. A classic example of a para-NP-hard parameterized problem is graph coloring, parameterized by the number of colors, which is already NP-hard for $k = 3$ (see Graph coloring#Computational complexity).
The A hierarchy is a collection of computational complexity classes similar to the W hierarchy. However, while the W hierarchy is a hierarchy contained in NP, the A hierarchy more closely mimics the polynomial-time hierarchy from classical complexity. It is known that A[1] = W[1] holds.