The Fine-Grained Complexity of Andersen's Pointer Analysis

06/02/2020
by Andreas Pavlogiannis, et al.

Pointer analysis is one of the fundamental problems in static program analysis. Given a set of pointers, the task is to produce a useful over-approximation of the memory locations that each pointer may point to at runtime. The most common formulation is Andersen's Pointer Analysis (APA), defined as an inclusion-based system of m pointer constraints over a set of n pointers. Existing algorithms solve APA in O(n^2 · m) time, and the problem has long been conjectured to admit no truly sub-cubic algorithm, though a proof has so far remained elusive. It is also well known that APA can be solved in O(n^2) time under certain sparsity conditions that hold naturally in some settings. Beyond these simple bounds, the complexity of the problem has remained poorly understood.

In this work we draw a rich fine-grained complexity landscape of APA, presenting both upper and lower bounds. First, we establish an O(n^3) upper bound for general APA, improving on O(n^2 · m) since n = O(m). Second, we show that sparse instances can be solved in O(n^{3/2}) time, improving the current O(n^2) bound. Third, we show that even on-demand APA ("may a specific pointer a point to a specific location b?") has an Ω(n^3) (combinatorial) lower bound under standard complexity-theoretic hypotheses. This formally establishes the long-conjectured "cubic bottleneck" of APA and shows that our O(n^3)-time algorithm is optimal. Fourth, we show that under mild restrictions, APA is solvable in Õ(n^ω) time, where ω < 2.373 is the matrix-multiplication exponent. It is believed that ω = 2 + o(1), in which case this bound becomes quadratic. Fifth, we show that even under such restrictions, the on-demand problem has an Ω(n^2) lower bound under standard complexity-theoretic hypotheses, and hence our algorithm is optimal when ω = 2 + o(1).
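
To make the APA formulation concrete, here is a minimal sketch of a naive inclusion-constraint solver in Python. The tuple encoding, the function name andersen, and the variable names are illustrative assumptions, not taken from the paper; the sketch runs plain fixpoint iteration over the four standard Andersen constraint forms, rather than any of the paper's optimized algorithms.

```python
# A minimal, illustrative Andersen-style solver (naive fixpoint iteration;
# the encoding and names are assumptions, not the paper's algorithm).
# Constraint forms over pointers/locations:
#   ("addr",  a, b)  means a ⊇ {b}   (a = &b)
#   ("copy",  a, b)  means a ⊇ b     (a = b)
#   ("load",  a, b)  means a ⊇ *b    (a = *b)
#   ("store", a, b)  means *a ⊇ b    (*a = b)
from collections import defaultdict

def andersen(constraints):
    pts = defaultdict(set)  # points-to sets: pointer -> set of locations
    changed = True
    while changed:  # iterate until a fixpoint is reached
        changed = False
        for kind, a, b in constraints:
            if kind == "addr":
                new = {b}
            elif kind == "copy":
                new = set(pts[b])
            elif kind == "load":
                # a = *b: a absorbs the points-to set of every target of b
                new = set().union(*(pts[c] for c in set(pts[b])))
            else:  # "store", *a = b: every target of a absorbs pts[b]
                for c in set(pts[a]):
                    if not pts[b] <= pts[c]:
                        pts[c] |= pts[b]
                        changed = True
                continue
            if not new <= pts[a]:
                pts[a] |= new
                changed = True
    return pts

# Example: x = &o1; p = &x; y = x; z = *p  =>  z may point to o1
cs = [("addr", "x", "o1"), ("addr", "p", "x"),
      ("copy", "y", "x"), ("load", "z", "p")]
print(sorted(andersen(cs)["z"]))  # ['o1']
```

This plain loop only illustrates the constraint semantics; worklist-based solvers with difference propagation achieve the classical O(n^2 · m) bound that the paper's results refine.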
