Approximation algorithms for problems in P

34

One usually thinks about approximate solutions (with guarantees) to NP-hard problems. Is there any research on approximating problems that are already known to be in P? This could be a good idea for several reasons. Off the top of my head, an approximation algorithm may run with much lower complexity (or even a much smaller constant), may use less space, or may be far better parallelizable.

Also, schemes that trade time for accuracy (FPTAS and PTAS) can be very attractive for problems in P whose known lower bounds are prohibitive for large inputs.

Three questions: Is there something I'm missing that makes this an obviously bad idea? Is there research on developing a theory of these algorithms? If not, does anyone at least know of isolated examples of such algorithms?

aelguindy
source
8
Computational geometry (e.g., ϵ-nets) and numerical linear algebra (e.g., various iterative methods) provide plenty of examples of approximation algorithms for problems that are trivially in P, but for which exact polynomial-time algorithms can be prohibitively expensive on huge real-world data sets.
Jukka Suomela
5
See cstheory.stackexchange.com/questions/9998/…
Tsuyoshi Ito

Answers:

20

As Jukka points out, computational geometry is a rich source of problems that can be solved in polynomial time but for which we want fast approximations. The classical "ideal" result is an "LTAS" (linear-time approximation scheme), whose running time has the form O(n + poly(1/ϵ)). These are usually obtained by extracting a kernel of constant (poly(1/ϵ)) size from the data and running an expensive algorithm on that kernel, with the guarantee that the exact solution on the kernel is an approximate solution on the whole input.

There are a number of tricks, reductions and principles, and Sariel Har-Peled's new book is full of these. I don't think there's a rich complexity theory as such.
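
To make the kernel recipe concrete, here is a minimal Python sketch for the diameter of a 2D point set. It is my own illustration, not taken from the book; the function name and grid constants are assumptions, justified only by the snapping analysis in the comments.

```python
import math
import random

def approx_diameter(points, eps=0.1):
    """(1 - eps)-approximate diameter of a 2D point set in O(n + 1/eps^4)
    time: snap points to a grid, keep one representative per cell, and run
    the exact quadratic algorithm on the small kernel."""
    p0 = points[0]
    # Farthest point from an arbitrary p0: delta <= diameter <= 2 * delta.
    delta = max(math.dist(p0, q) for q in points)
    if delta == 0:
        return 0.0
    # Snapping changes each pairwise distance by at most eps * delta <= eps * diam.
    cell = eps * delta / (2 * math.sqrt(2))
    kernel = {}
    for x, y in points:
        kernel.setdefault((int(x // cell), int(y // cell)), (x, y))
    reps = list(kernel.values())          # O(1/eps^2) representatives
    return max(math.dist(p, q) for p in reps for q in reps)

pts = [(random.random(), random.random()) for _ in range(100000)]
print(approx_diameter(pts, eps=0.1))
```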

Suresh Venkat
source
I think this is the closest to a "theory" that one could get. I will take a thorough look at the book. Thanks!
aelguindy
15

Non-exhaustive list of recent papers that find approximate solutions for problems in P

1) There is a large amount of work on approximately solving (symmetric, diagonally dominant) systems of linear equations in nearly linear time, O(n · polylog(n)).

(list of papers) http://cs-www.cs.yale.edu/homes/spielman/precon/precon.html

(In general, most iterative solvers for linear equations share the principle of ϵ-approximating the true solution, and the same goes for iterative methods that solve more general problems, e.g., some convex/linear programs; a minimal solver sketch appears after this list.)

2) Approximate solutions to min/max s-t cuts/flows http://people.csail.mit.edu/madry/docs/maxflow.pdf

3) Finding a sparse approximation of the Fourier transform of a signal in sublinear time http://arxiv.org/pdf/1201.2501v1.pdf

4) Finding the approximate principal component of a matrix (see the power-iteration sketch below) http://www.stanford.edu/~montanar/RESEARCH/FILEPAP/GossipPCA.pdf
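
To illustrate the parenthetical note under item 1, here is a textbook conjugate gradient solver, iterated until the relative residual drops below ϵ. This is plain CG, a minimal sketch only, not the nearly-linear-time SDD solvers from Spielman's list:

```python
import numpy as np

def conjugate_gradient(A, b, eps=1e-6):
    """Iterate plain conjugate gradient until the relative residual
    ||Ax - b|| / ||b|| drops below eps (A symmetric positive definite)."""
    x = np.zeros_like(b)
    r = b - A @ x                    # residual
    p = r.copy()                     # search direction
    rs = r @ r
    while np.sqrt(rs) > eps * np.linalg.norm(b):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p    # next A-conjugate direction
        rs = rs_new
    return x

# Demo on a random symmetric, strictly diagonally dominant system.
n = 200
M = np.random.rand(n, n)
A = (M + M.T) / 2 + n * np.eye(n)
b = np.random.rand(n)
x = conjugate_gradient(A, b, eps=1e-8)
print(np.linalg.norm(A @ x - b))
```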
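
And for item 4, the classical baseline is plain power iteration, which approximates the principal component of a positive semidefinite matrix; this sketch is only the textbook method, not the gossip-based algorithm of the linked paper:

```python
import numpy as np

def power_iteration(A, iters=200):
    """Approximate the top eigenvector of a symmetric matrix by repeated
    multiplication; the error decays geometrically in the spectral gap."""
    v = np.random.randn(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v, float(v @ A @ v)   # eigenvector estimate, Rayleigh quotient

X = np.random.randn(1000, 300)   # data matrix
A = X.T @ X / len(X)             # sample covariance (PSD)
v, top_eig = power_iteration(A)
print(top_eig)
```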

Dimitris
source
11

I am not aware of a general theory being developed on approximation algorithms for problems in P. I know of a particular problem, though, that is called approximate distance oracles:

Given a weighted undirected graph G = (V, E) with n = |V| nodes and m = |E| edges, a point-to-point query asks for the distance between two nodes s, t ∈ V.

There is a three-way trade-off between space, query time, and approximation in the distance oracle problem. One can trivially answer each query exactly (approximation = 1) in O(1) time by storing the all-pairs distance matrix, or in O(m + n log n) time by running a shortest-path algorithm. For massive graphs, these two solutions may require prohibitively large space (to store the matrix) or query time (to run a shortest-path algorithm). Hence, we allow approximation.

For general graphs, the state of the art is the distance oracle of Thorup and Zwick: for any integer k ≥ 1, it answers queries with stretch 2k − 1 using O(k · n^(1+1/k)) space, which is essentially optimal. It also gives you a nice space-approximation trade-off.

For sparse graphs, a more general space-approximation-time trade-off can be shown.
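
For flavor, here is a far simpler (and far weaker) construction than Thorup-Zwick: a landmark-based oracle that precomputes BFS from k random nodes and answers queries with a triangle-inequality upper bound. The class and parameter names are my own illustration; it trades O(kn) space for a heuristic, not worst-case, approximation.

```python
import random
from collections import deque

def bfs_distances(adj, source):
    """Hop distances from `source` in an unweighted graph (dict of sets)."""
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

class LandmarkOracle:
    """Precompute BFS from k random landmarks (O(k * n) space); a query
    (s, t) returns min over landmarks L of d(s, L) + d(L, t), an upper
    bound that is exact when some landmark lies on a shortest s-t path.
    Assumes a connected graph."""
    def __init__(self, adj, k=10):
        self.tables = [bfs_distances(adj, L)
                       for L in random.sample(list(adj), k)]
    def query(self, s, t):
        return min(tab[s] + tab[t] for tab in self.tables)

# Path graph 0-1-...-99: the estimate is exact whenever a landmark
# falls between s and t.
adj = {i: {j for j in (i - 1, i + 1) if 0 <= j < 100} for i in range(100)}
print(LandmarkOracle(adj, k=5).query(3, 90))   # >= the true distance 87
```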

Rachit
source
11

We often seek approximate solutions even to simple problems, like finding shortest paths in a graph or counting the number of distinct elements in a data set. The constraint here is that the input is large and we want to solve the problem approximately in a single pass over the data. There are several "streaming" algorithms designed to achieve approximate solutions in linear/near-linear time.
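
As a concrete one-pass example for the distinct-elements problem, here is a sketch of the classical k-minimum-values (KMV) estimator; the function name and parameters are illustrative assumptions.

```python
import hashlib
import heapq

def kmv_distinct(stream, k=256):
    """One-pass estimate of the number of distinct elements: hash items
    into [0, 1) and keep the k smallest hashes; if the k-th smallest is h,
    roughly (k - 1) / h distinct values are spread over [0, 1)."""
    heap, kept = [], set()   # negated hashes: a max-heap of the k smallest
    for item in stream:
        h = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8],
                           'big') / 2.0 ** 64
        if h in kept:                     # duplicate of a kept hash
            continue
        if len(heap) < k:
            heapq.heappush(heap, -h)
            kept.add(h)
        elif h < -heap[0]:                # beats the current k-th smallest
            kept.discard(-heapq.heappushpop(heap, -h))
            kept.add(h)
    if len(heap) < k:                     # fewer than k distinct items: exact
        return len(heap)
    return int((k - 1) / -heap[0])

print(kmv_distinct(i % 10000 for i in range(10 ** 6)))   # approx 10000
```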

One problem I worked on is approximating betweenness centrality. It can be solved exactly in O(nm) time on graphs with n vertices and m edges; a linear-time algorithm giving a constant-factor approximation to betweenness centrality is of great practical importance.
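
I will not presume this is the author's own algorithm, but a standard way to trade accuracy for time here is pivot sampling in the style of Brandes and Pich: run the single-source dependency accumulation from a few random sources and rescale. A sketch for unweighted graphs, with adjacency as a dict of sets:

```python
import random
from collections import deque

def approx_betweenness(adj, samples=50):
    """Estimate betweenness centrality by running Brandes' single-source
    dependency accumulation from `samples` random pivots and rescaling by
    n/samples, instead of running it from all n sources."""
    nodes = list(adj)
    samples = min(samples, len(nodes))
    bc = {v: 0.0 for v in nodes}
    for s in random.sample(nodes, samples):
        # Phase 1: BFS recording shortest-path counts and predecessors.
        sigma, dist = {s: 1.0}, {s: 0}
        preds = {v: [] for v in nodes}
        order, queue = [], deque([s])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in adj[u]:
                if v not in dist:
                    dist[v], sigma[v] = dist[u] + 1, 0.0
                    queue.append(v)
                if dist[v] == dist[u] + 1:
                    sigma[v] += sigma[u]
                    preds[v].append(u)
        # Phase 2: accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in order}
        for w in reversed(order):
            for u in preds[w]:
                delta[u] += sigma[u] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    n = len(nodes)
    return {v: bc[v] * n / samples for v in nodes}
```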

Shiva Kintali
source
9

Another reason we sometimes look for approximation algorithms for problems in P is that some other (non-computational) constraint on our algorithm precludes an exact solution. One example of such a constraint is differential privacy: differentially private algorithms necessarily return only approximate solutions. So in recent years there has been an effort to design approximation algorithms for problems that are otherwise easy to solve exactly: e.g., min-cut, low-rank matrix approximation, computing simple summations, etc. Some of these also happen to run faster than the exact solution would, but others have running times much slower than a non-private exact algorithm.
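
For the simplest of these, computing summations, the private mechanism is essentially a one-liner: clip each contribution to bound the sensitivity, then add Laplace noise calibrated to the privacy parameter. A minimal numpy sketch (the function and parameter names are my own):

```python
import numpy as np

def private_sum(values, eps=0.5, clip=1.0):
    """eps-differentially private sum: clipping each value to [0, clip]
    bounds the sensitivity by clip, so Laplace noise of scale clip/eps
    suffices. The answer is necessarily only approximate."""
    clipped = np.clip(values, 0.0, clip)
    return float(clipped.sum() + np.random.laplace(scale=clip / eps))

data = np.random.rand(10000)
print(data.sum(), private_sum(data, eps=0.1))
```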

Aaron Roth
source
2
That's another great motivation for approximation. Thanks for pointing that out!
aelguindy
8

I think that the entire area of data streaming and sublinear algorithms is an effort in this direction. In data streaming the focus is on solving problems in o(n), ideally O(polylog(n)), space, whereas in sublinear algorithms you try to get algorithms with o(n) running time. In both cases one often has to settle for a randomized approximation algorithm.
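
A representative example of this compromise (though this particular one happens to be deterministic) is the Misra-Gries frequent-elements summary: one pass, O(k) space, and frequency estimates that undercount by at most n/k. A minimal sketch:

```python
def misra_gries(stream, k):
    """One-pass, O(k)-space frequency summary: every item occurring more
    than n/k times is guaranteed to survive in `counters`, and each count
    underestimates the true frequency by at most n/k."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # No room: decrement everything, dropping counters that hit zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

print(misra_gries("abracadabra", k=3))   # 'a' (5 of 11) is sure to survive
```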

You can start with the material on this page and this.

Shitikanth
source
8

The idea of using approximation algorithms for problems in P is very old and ubiquitous. Several problems in numerical linear algebra, including solving linear systems, are handled via iterative methods that converge to the answer quickly but don't in general find the exact answer; the rate of convergence needed to achieve a relative or additive ϵ-approximation is analyzed as a function of the problem size and ϵ. There are a number of papers on approximately solving special cases of linear programming, such as multicommodity flows (and, more generally, packing and covering LPs). There is no separate theory of approximation for problems in P versus NP-hard problems (we don't know whether P equals NP or not); one can only talk about a certain technique being applicable to a certain class of problems. For instance, there are general techniques known for approximately solving packing and covering linear programs and some variants.
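
As a toy instance of the multiplicative-weights principle behind many of those packing/covering results, here is a sketch that additively ϵ-approximates the value of a zero-sum game (itself an LP, so exactly solvable in polynomial time) with O(log m / ϵ²) cheap iterations. The function is my own illustration; the constants follow the standard regret analysis.

```python
import numpy as np

def approx_game_value(A, eps=0.05):
    """Additive eps-approximation of the value of a zero-sum game with
    payoffs in [0, 1], via multiplicative weights for the row player
    against a best-responding column player."""
    m, n = A.shape
    T = int(np.ceil(4 * np.log(m) / eps ** 2))
    eta = np.sqrt(np.log(m) / T)
    w = np.ones(m)
    values = []
    for _ in range(T):
        p = w / w.sum()               # row player's current mixed strategy
        j = int(np.argmin(p @ A))     # column player's best response
        values.append(float(p @ A[:, j]))
        w *= np.exp(eta * A[:, j])    # boost rows that scored well
    # By the standard regret bound, this average is within eps of the
    # game value.
    return float(np.mean(values))

A = np.random.rand(50, 50)
print(approx_game_value(A, eps=0.05))
```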

Chandra Chekuri
source
4

Dimitris mentions approximating Fourier transforms. This is widely used in image compression, e.g., in the JPEG algorithm [1]. Although I haven't seen a paper that emphasizes this point, in some sense a lossy compression [2] (with derivable limits) can also be viewed as a P-time approximation algorithm. The approximation aspects are highly developed and fine-tuned/specialized, in the sense that they are optimized so that they cannot be perceived by human vision, i.e., the human perception of encoding artifacts (roughly, the difference between the approximation and the lossless compression) is minimized.

This is related to theories of how the human eye perceives, or itself "approximates", color encoding via some algorithm-like process. In other words, the theoretical approximation scheme/algorithm is intentionally designed to match the physical/biological approximation scheme/algorithm (implemented by biological information processing, i.e., the neurons of the human visual system).

So the compression is tightly coupled with the approximation. In JPEG the Fourier transform is approximated by the DCT, the discrete cosine transform [3]. Similar principles are employed across multiple frames in the MPEG video compression standard [4].
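
A minimal numpy sketch of the core idea (not actual JPEG, which also quantizes, zigzag-orders, and entropy-codes the coefficients): transform an 8x8 block to the DCT basis, keep only the largest coefficients, and transform back.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: row j samples frequency j."""
    k = np.arange(n)
    D = np.cos(np.pi / n * (k[:, None] + 0.5) * k[None, :]).T
    D[0] /= np.sqrt(2)
    return D * np.sqrt(2.0 / n)

def compress_block(block, keep=10):
    """Lossy 'compression' of an image block: zero all but the `keep`
    largest DCT coefficients, then invert the transform."""
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0
    return D.T @ coeffs @ D

block = np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth 8x8 ramp
approx = compress_block(block, keep=6)
print(np.abs(block - approx).max())                    # approximation error
```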

[1] JPEG compression, Wikipedia

[2] Lossy compression, Wikipedia

[3] Discrete cosine transform (DCT), Wikipedia

[4] MPEG, Wikipedia

vzn
source
1

Maybe this doesn't exactly answer your question, because right now I can only recall some heuristics, but I'm sure there are approximations too, because I have seen them before.

In some fields, like FPT (fixed-parameter tractability), you have problems, usually on graphs, that can be solved in O(f(k) · |G|^α) time, e.g., solving TSP on bounded-treewidth graphs, or finding a tree decomposition of a graph of small treewidth. But often these algorithms aren't good enough: f(k) is too large for real-world use. So approximations or heuristics are welcome here; for example, you can take a look at heuristics for TSP on bounded-treewidth graphs, at algorithms for the Maximum Agreement Forest problem and its later approximations/heuristics (a simple Google search shows results from 2010 and 2011), or at algorithms for finding tree decompositions, such as the elimination-ordering heuristic sketched below.
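
For the last item, here is the classic min-degree elimination heuristic, which yields an upper bound on treewidth (and, implicitly, a tree decomposition) with no approximation guarantee in general; the adjacency-dict interface is my own choice.

```python
def min_degree_width(adj):
    """Heuristic upper bound on treewidth via a min-degree elimination
    ordering: repeatedly remove a vertex of minimum degree and turn its
    neighborhood into a clique; the largest neighborhood removed bounds
    the width of the corresponding tree decomposition."""
    g = {v: set(ns) for v, ns in adj.items()}  # work on a mutable copy
    width = 0
    while g:
        v = min(g, key=lambda u: len(g[u]))    # minimum-degree vertex
        nbrs = g.pop(v)
        width = max(width, len(nbrs))
        for u in nbrs:
            g[u] |= nbrs - {u}                 # fill in the clique
            g[u].discard(v)                    # v is eliminated
    return width

# A cycle on 6 vertices has treewidth 2; the heuristic matches it here.
cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(min_degree_width(cycle))
```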

Saeed
source