Fine-grained reduction

In computational complexity theory, a fine-grained reduction is a transformation from one computational problem to another, used to relate the difficulty of improving the time bounds for the two problems. Intuitively, it provides a method for solving one problem efficiently by using the solution to the other problem as a subroutine. If problem A can be solved in time a(n) and problem B can be solved in time b(n), then the existence of an (a,b)-reduction from problem A to problem B implies that any significant speedup for problem B would also lead to a speedup for problem A.

Definition

Let A and B be computational problems, specified as the desired output for each possible input. Let a and b both be time-constructible functions that take an integer argument n and produce an integer result. Usually, a and b are the time bounds for known or naive algorithms for the two problems, and often they are monomials such as n².[1]

Then A is said to be (a,b)-reducible to B if, for every real number ϵ>0, there exist a real number δ>0 and an algorithm that solves each instance of problem A by transforming it into a sequence of instances of problem B, taking time O(a(n)^(1−δ)) for the transformation on instances of size n, and producing a sequence of instances whose sizes n_i satisfy ∑_i b(n_i)^(1−ϵ) < a(n)^(1−δ).[1]

An (a,b)-reduction is thus given by the mapping from ϵ to the pair consisting of δ and the corresponding algorithm.[1]
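The key inequality in the definition can be checked numerically. The following sketch is purely illustrative: it assumes a hypothetical (n², n²)-reduction that splits one instance of size n into two half-size instances, with ϵ and δ values chosen only for demonstration, not taken from any real reduction.

```python
def reduction_cost_ok(n, instance_sizes, a, b, eps, delta):
    """Check the inequality from the definition of an (a,b)-reduction:
    sum over i of b(n_i)^(1-eps) must be less than a(n)^(1-delta)."""
    total = sum(b(ni) ** (1 - eps) for ni in instance_sizes)
    return total < a(n) ** (1 - delta)

# Hypothetical example: a(n) = b(n) = n^2, and the (imagined) reduction maps
# one instance of size n to two instances of size n/2.
a = b = lambda n: n ** 2
n = 1_000_000
sizes = [n // 2, n // 2]

# For eps = 0.1, the inequality holds with delta = 0.05 (illustrative values):
print(reduction_cost_ok(n, sizes, a, b, eps=0.1, delta=0.05))  # True
```

A mapping that produced too many or too large instances (say, five full-size copies) would violate the inequality for the same ϵ and δ, and so would not qualify as an (a,b)-reduction with those parameters.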

Speedup implication

Suppose A is (a,b)-reducible to B, and there exists ϵ>0 such that B can be solved in time O(b(n)^(1−ϵ)). Then there also exists δ>0 such that A can be solved in time O(a(n)^(1−δ)): namely, let δ be the value given by the (a,b)-reduction for this ϵ, and solve A by applying the transformation of the reduction and using the fast algorithm for B on each resulting subproblem.[1]

Equivalently, if A cannot be solved in time significantly faster than a(n), then B cannot be solved in time significantly faster than b(n).[1]
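The exponent arithmetic behind this implication can be made concrete. The sketch below assumes an (n³, n³)-reduction and a hypothetical δ supplied by that reduction; the specific numbers are illustrative, not drawn from any actual result.

```python
def implied_exponent(k, delta):
    """If A has naive bound a(n) = n^k and the reduction supplies delta,
    then A inherits a running time of O(n^(k * (1 - delta)))."""
    return k * (1 - delta)

# Suppose B gains a truly subcubic algorithm, say with eps = 0.1
# (running time O(n^2.7)), and the (n^3, n^3)-reduction guarantees
# delta = 0.02 for that eps (a hypothetical value).
exponent_for_A = implied_exponent(3, 0.02)
print(exponent_for_A)  # an exponent strictly below 3, so A is also truly subcubic
```

Any δ>0, however small, suffices: the point of the definition is only that the exponent for A drops strictly below that of the naive bound.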

History

Fine-grained reductions were defined, in the special case that a and b are equal monomials, by Virginia Vassilevska Williams and Ryan Williams in 2010. They also showed the existence of (n³, n³)-reductions between several problems, including all-pairs shortest paths, finding the second-shortest path between two given vertices in a weighted graph, finding negative-weight triangles in weighted graphs, and testing whether a given distance matrix describes a metric space. According to their results, either all of these problems have time bounds with exponents less than three, or none of them do.[2]

The term "fine-grained reduction" comes from later work by Virginia Vassilevska Williams in an invited presentation at the 10th International Symposium on Parameterized and Exact Computation.[1]

Although the original definition of fine-grained reductions involved deterministic algorithms, the corresponding concepts for randomized algorithms and nondeterministic algorithms have also been considered.[3]

References

  1. Vassilevska Williams, Virginia (2015), "Hardness of easy problems: basing hardness on popular conjectures such as the Strong Exponential Time Hypothesis", 10th International Symposium on Parameterized and Exact Computation (IPEC 2015), Leibniz International Proceedings in Informatics (LIPIcs), vol. 43, Schloss Dagstuhl.
  2. Williams, Virginia Vassilevska; Williams, R. Ryan (2018), "Subcubic equivalences between path, matrix, and triangle problems", Journal of the ACM, 65 (5). A preliminary version of these results, including the definition of a "subcubic reduction", a special case of a fine-grained reduction, was presented at the 2010 Symposium on Foundations of Computer Science.
  3. Carmosino, Marco L.; Gao, Jiawei; Impagliazzo, Russell; Mihajlin, Ivan; Paturi, Ramamohan; Schneider, Stefan (2016), "Nondeterministic extensions of the Strong Exponential Time Hypothesis and consequences for non-reducibility", 7th Innovations in Theoretical Computer Science Conference (ITCS 2016).