Fine-Grained Complexity deals with the exact complexity of problems, and establishes a web of refined reductions that preserve exact solving times. While many of the key ideas come from well-known frameworks (the NP-completeness program, parameterized algorithms and complexity, etc.), this significant new perspective has emerged only recently.
The main question posed by this new field is “given a problem known to be solvable in time $t(n)$, is there an $\epsilon > 0$ such that it can be solved in $O(t(n)^{1-\epsilon})$?”. If such a result exists, we can connect this improvement to algorithmic advances among different problems. If it does not, we can establish conditional lower bounds based on this hardness, as is usually done with popular conjectures [6, 24]. Several such conjectures are used, such as the Orthogonal Vectors Conjecture (OVC), the Strong Exponential Time Hypothesis (SETH), the APSP conjecture, etc. It has been shown that OVC is implied by SETH, and the variants and consequences of both conjectures have been extensively studied.
A main difference between the fine-grained approach and classical complexity theory is that all NP-complete problems form an equivalence class modulo polynomial-time reductions. On the contrary, fine-grained reductions produce a much more complex web. Many problems stem from SETH/OVC, others from the 3-SUM conjecture (especially Computational Geometry problems), and very few equivalence classes are known (a significant exception is the equivalence class for APSP). These observations raise questions concerning the structural complexity of fine-grained reducibility, as has traditionally been the case in other fields of complexity theory: conditional irreducibility results, the morphology of the equivalence classes formed, fine-grained completeness notions, consequences of partitioning problems into classes, etc., together outline a fine-grained structural complexity program.
On the other hand, the parameterized point of view has been dominant in theoretical computer science over the last decades. ETH and SETH were introduced in that context, and are widely used to establish conditional lower bounds (SETH-hardness). Additionally, Fixed Parameter Tractability (FPT) (a problem is called fixed parameter tractable if there is a parameterization $\kappa$ such that the problem can be solved in time $f(k) \cdot n^{O(1)}$, for a computable function $f$) gave a multi-variable view of complexity theory, as well as the means to concentrate the hardness of problems on a certain parameter, instead of the input size: many problems admit significant improvements on their complexity if one restricts them to instances having fixed $k$, where $k$ is a parameter of the aforementioned problems. This can be viewed as an indication of the structural importance of $k$ in each problem.
Similar techniques can be used to differentiate versions of problems, as has been seen recently in the case of fine-grained conjectures (e.g. problems on sparse graphs).
The conditional bounds shown by fine-grained reductions stem from relating improvements between the conjectured best running times of two problems. This has resulted in an effort to classify problems either through equivalence or hardness via a minimal element (OV-hardness).
Additionally, most known fine-grained reductions inherently relate the problems in more than trivial ways, also mapping specific parameters of each problem to one another. This could indicate relations between the problems’ property of concentrating hardness to certain parameters.
On the other hand, while parameterized complexity has traditionally been concerned with bridging the gap between polynomial and exponential running times, there has recently been interest in fixed-parameter improvements amongst polynomial-time problems (sometimes referred to as FPT in P [23, 18]).
The ability of fine-grained complexity to express correlations between problems of various running times comes with some inherent theoretical obstacles. Specifically, the new viewpoint of a problem's "hardness" is associated with the capability to improve its running time. This results in a "counter-intuitive" notion of hard problems, as they frequently correspond to (in classical terms) easy ones. Moreover, the foundation on which this framework is based allows the computational resources of a fine-grained reduction to change depending on the problems involved. This produces vagueness regarding what would be considered a complexity class compatible with such reductions.
Our concern is to overcome the inherent difficulties of the field through constructive methods that produce generalizable results, as well as to contribute to the effort of establishing structural foundations for the framework. This could further our understanding of what constitutes difficulty in computation, as well as structurally define improvability.
1.2 Our Results
We introduce Parameterized Fine-Grained Reductions (PFGR), a parameterized approach to fine-grained reducibility that is consistent with known fine-grained reductions and offers (a) tools to study structural correlations between problems, and (b) an extension to the suite of results that are obtained through the reductions. This provides a multivariate approach to analyzing fine-grained reductions. Additionally, we give evidence that these reductions connect structural properties of the problems, such as the aforementioned concentration of hardness.
We define a class of problems (FPI) that admit parameterized improvements on their respective conjectured best running times. To this end, we treat improvements in the same way as fine-grained complexity does (i.e. excluding polylogarithmic improvements in the running time). This gives us the expressive power to correlate structural properties of problems that belong to different complexity classes.
We prove that this class is closed under the aforementioned parameterized fine-grained reductions, which can be used as a tool to produce non-trivial parameterized algorithms (via the reduction process). We present such an application in the case of the reduction from OV to Diameter, in which we use a fixed-parameter (with respect to treewidth) algorithm for Diameter to produce a new sub-quadratic fixed-parameter algorithm for OV running in time $2^{O(d \log d)} \cdot n^{1+o(1)}$, where $d$ is the dimension of the input vectors.
Finally, we use notions from parameterized circuit complexity to analyze membership in this class and introduce a circuit characterization, similar to the one used in the definition of the W-Hierarchy in parameterized complexity.
1.3 Related Work
The fine-grained reductions literature has grown quickly over recent years (see the survey of Vassilevska Williams). The basis for reductions has been a set of conjectures that are widely considered to be true: namely, the 3SUM, APSP, Hitting Set and SETH conjectures, which are associated with the respective plausibility of improvement for each problem.
A large portion of known reductions stems from the Orthogonal Vectors problem, which is known to be SETH-hard; thus OV plays a central role in the structure of the reductions web. Different versions of the OV conjecture have been studied, usually parameterized by the dimension $d$.
The logical expressibility of problems similar to OV was also studied, research that created a notion of hardness for problems expressible in first-order logic and introduced various equivalence classes ([17, 16, 11]) concerning different versions of the OV problem, with significant applications to other fields (such as Computational Geometry). Additionally, the OV conjecture was studied in restricted models, such as branching programs.
Structural implications of fine-grained irreducibility and hypotheses were studied, culminating in new conjectures such as NSETH. New implications of refuting the aforementioned hypotheses on circuit lower bounds were discovered [19, 4, 1]; in particular, the refutation of SETH would imply state-of-the-art lower bounds on non-uniform circuits. Recently, fine-grained hypotheses were connected to long-standing questions in complexity theory, such as the derandomization of complexity classes.
The parameterized analysis of algorithms, one of the most active areas in theoretical and applied computer science, has frequently been used to provide tools in fine-grained complexity. As such, new conjectures have been formed about the solvability of polynomial-time problems in terms of parameters.
In a notable case of similar work, the authors analyze the multivariate (parameterized) complexity of the Longest Common Subsequence problem (LCS), taking into account all of the commonly discussed parameters of the problem. As a result, they produce a general conditional lower bound accounting for the conjunction of different parameterized algorithms for LCS: unless SETH fails, the optimal running time for LCS is determined by the aforementioned parameters. Note that our work is in a different direction from this result. Instead of separately reducing SETH to each different parameterized case, we give the means to show correlations between parameters in such reductions, i.e. with this framework one can analyze a single reduction to show multiple dependencies between parameterized improvements for each problem. While this automatically produces several conditional bounds among parameterizations of problems, it proves most useful in the opposite direction, namely to transfer improvements between problems and thus derive new parameterized algorithms.
We denote by $[n]$, for $n \in \mathbb{N}$, the set $\{1, 2, \dots, n\}$.
[Generalized Parameterized Languages] Let $L \subseteq \Sigma^*$ be a language and $\kappa_1, \dots, \kappa_t : \Sigma^* \to \mathbb{N}$ parameterization functions, $t \in \mathbb{N}$. Let $(L, \kappa_1, \dots, \kappa_t)$ denote the corresponding parameterized language. For simplicity, we will use $(L, \bar{\kappa})$ to abbreviate $(L, \kappa_1, \dots, \kappa_t)$, and $(x, \bar{k})$ to denote an input instance for $(L, \bar{\kappa})$.
Note here the divergence from the classical definition, which associates each problem with only one parameter $\kappa$. We prefer the generalized version that allows us to describe simultaneously several structural measures of the problem, such as number of variables, number of nodes and more complex ones. For each one of those parameters we assume that there exists an index $i$ such that $\kappa_i$ corresponds to the mapping of the input instance to this specific parameter. In this way we can not only isolate and analyze different characteristics of structures but also treat each of these measures individually.
[OV] Define the Orthogonal Vectors problem (OV) as follows: Given two sets $U, V \subseteq \{0,1\}^d$, with $|U| = |V| = n$, are there two vectors $u \in U$, $v \in V$, such that $\sum_{i=1}^{d} u_i \cdot v_i = 0$?
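A minimal brute-force sketch of the definition: the quadratic baseline that the OV conjecture asserts cannot be improved to $O(n^{2-\epsilon})$ for dimension $d = \omega(\log n)$.

```python
from itertools import product

def has_orthogonal_pair(U, V):
    """Decide OV by brute force in O(n^2 * d) time for |U| = |V| = n
    vectors of dimension d: check every pair for a zero inner product."""
    return any(all(u[i] * v[i] == 0 for i in range(len(u)))
               for u, v in product(U, V))
```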
[$3/2$-Approx-Diameter] Given a graph $G = (V, E)$, approximate its diameter, i.e. the quantity $\max_{u,v \in V} d(u, v)$, within a factor of $3/2$.
We will also define the notion of treewidth as we will later present a result that utilizes it as a graph parameter.
[Treewidth] A tree decomposition of a graph $G = (V, E)$ is a tree $T$ with nodes $X_1, \dots, X_m$ (called bags), where each $X_i$ is a subset of $V$, satisfying the following properties:
The union of all sets $X_i$ equals $V$. That is, each graph vertex is contained in at least one bag.
The tree nodes containing a vertex $v$ form a connected subtree of $T$.
For every edge $(u, v)$ in the graph, there is a bag $X_i$ that contains both $u$ and $v$.
The width of a tree decomposition is the size of its largest bag minus one. The treewidth of a graph $G$ is the minimum width among all possible tree decompositions of $G$.
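The three defining properties can be checked mechanically; a minimal sketch (representation conventions are our own: bags as a dict from tree-node id to vertex set):

```python
def is_tree_decomposition(vertices, edges, bags, tree_edges):
    """Check the three tree-decomposition properties from the definition.
    bags: dict mapping tree-node id -> set of graph vertices.
    tree_edges: edges of the decomposition tree T (assumed to be a tree)."""
    # (1) every graph vertex appears in at least one bag
    if set().union(*bags.values()) != set(vertices):
        return False
    # (2) for each vertex, the bags containing it form a connected subtree
    for v in vertices:
        nodes = {t for t, bag in bags.items() if v in bag}
        seen, stack = set(), [next(iter(nodes))]
        while stack:                       # BFS inside the induced subtree
            t = stack.pop()
            if t in seen:
                continue
            seen.add(t)
            stack.extend(u for (a, b) in tree_edges for u in (a, b)
                         if t in (a, b) and u in nodes and u not in seen)
        if seen != nodes:
            return False
    # (3) every graph edge is covered by some bag
    return all(any({u, v} <= bag for bag in bags.values()) for u, v in edges)

def width(bags):
    """Width = size of the largest bag minus one."""
    return max(len(b) for b in bags.values()) - 1
```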
We will utilize the following notions from circuit complexity theory (for more details the reader is referred to Ch. 6 of the textbook of Arora and Barak).
The circuit-size complexity of a Boolean function $f$ is the minimal size (number of gates) of any circuit computing $f$. The circuit-depth complexity of $f$ is the minimal depth of any circuit computing $f$.
We will also use the following definition of fine-grained reductions from the literature. [Fine-Grained Reduction] Let $a, b$ be nondecreasing functions of $n$. Problem $A$ is $(a, b)$-reducible to problem $B$ (denoted $A \leq_{a,b} B$), if for all $\epsilon > 0$ there exists a $\delta > 0$, and an algorithm $R$ solving $A$ with oracle access to $B$, such that $R$ runs in at most $a(n)^{1-\delta}$ time, making at most $q(n)$ oracle queries adaptively (i.e. the $i$-th instance $B_i$ is a function of the answers to the previous queries). The sizes $n_i$ of the queries, for any choice of oracle answers, obey the inequality: $\sum_{i=1}^{q(n)} b(n_i)^{1-\epsilon} \leq a(n)^{1-\delta}$.
3 Parameterized Fine-Grained Reductions
In this section we define Parameterized Fine-Grained Reductions (PFGR), along with some examples of applications, and show their relation to fine-grained reductions.
[PFGR] Given parameterized problems $(A, \bar{\kappa})$ and $(B, \bar{\lambda})$ with $a(n), b(n)$ their respective conjectured best running times: We say $(A, \bar{\kappa}) \leq_{a,b} (B, \bar{\lambda})$ if there exists an algorithm $R$ such that
For every $\epsilon > 0$ there exists a $\delta > 0$ such that $R$ runs in $a(n)^{1-\delta}$ time on inputs of length $n$ by making $q(n)$ query calls to $B$ with query lengths $n_1, \dots, n_{q(n)}$, and $\sum_{j=1}^{q(n)} b(n_j)^{1-\epsilon} \leq c \cdot a(n)^{1-\delta}$, for some constant $c$, and $R$ accepts iff $x \in A$.
For every query $y_j$ and every parameter $\lambda_i$ of $B$, there exists a computable function $g_i^j$, defined as $g_i^j : \mathbb{N}^t \to \mathbb{N}$, such that $\lambda_i(y_j) = g_i^j(\bar{\kappa}(x))$ for every input $x$.
The number of calls is specific to the type of reduction used. In the case of adaptive queries, the number of potential calls could grow exponentially (in the worst case). A reasonable objection could arise here, since the mappings $g_i^j$ are not assumed to have any time restriction. However, the $g_i^j$ are not implemented by the reduction algorithm; they are merely correlating functions of the parameters. As such, they do not affect the running time of the reduction.
Additionally, one might suspect that this definition is a limitation on the original fine-grained framework, and hence is only satisfied by some of the known reductions. The main obstacle is that most of the known reductions refer to non-parameterized problems. This, however, is easily addressed by our formalization, as we can view these as projections of PFGR (see Section 3):
Given a problem and an input instance, the parameterized version of the problem can be produced by extending the input with the computable function that defines each parameter over it. We can now redefine any reduction in which it took part, simply replacing the problem with its parameterized version. This essentially provides us with all of the possible parameterizations a problem can have, and uses them as a whole in order to preserve structural characteristics.
While analyzing reductions, some notable cases occur: Firstly, the case of only one query call, as observed in the majority of known fine-grained reductions. Secondly, the case where, even though many query calls are made, the constructions of the input instances maintain uniform mappings of the parameters, i.e. $g_i^1 = g_i^2 = \dots = g_i^{q(n)}$ for each $i$. Lastly, the case where the value of each parameter of problem $B$ is only related to a single parameter of $A$, i.e. $\lambda_i = g_i(\kappa_j)$. (This last case is especially useful in transferring parameterized improvements, as we will see in Section 4.)
We provide some examples to further clarify our definition and notation: Consider the well-studied reduction SAT $\leq$ OV, presented in the literature. It is apparent that, through one call, the number of clauses $m$ of the SAT instance corresponds to the dimension $d$ of the OV instance, and the number of variables $n$ is mapped to the number of vectors via the mapping $n \mapsto 2^{n/2}$. This means that the input instance to OV will contain $2^{n/2}$ vectors of dimension $m$. This procedure is summarized in the first row of the following table, along with other indicative reductions in the same context. For a more detailed analysis of each reduction see the full version.
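The split-and-encode step of this reduction can be sketched as follows (the encoding conventions, such as signed integers for literals, are our own):

```python
from itertools import product

def sat_to_ov(clauses, n):
    """Sketch of the standard CNF-SAT -> OV reduction: split the n
    variables into two halves and encode each partial assignment of a
    half as a 0/1 vector of dimension m = len(clauses). Coordinate j is
    0 iff the partial assignment already satisfies clause j, so a pair
    of vectors is orthogonal iff the combined assignment satisfies
    every clause. Each side contains 2^{n/2} vectors.
    Literals are +i / -i for variable i (1-indexed)."""
    half = n // 2

    def vectors(variables):
        vecs = []
        for bits in product([False, True], repeat=len(variables)):
            assign = dict(zip(variables, bits))
            vecs.append(tuple(
                0 if any((lit > 0) == assign[abs(lit)]
                         for lit in clause if abs(lit) in assign)
                else 1
                for clause in clauses))
        return vecs

    return (vectors(list(range(1, half + 1))),
            vectors(list(range(half + 1, n + 1))))
```

The formula is satisfiable exactly when the produced OV instance contains an orthogonal pair.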
Consistency with fine grained complexity
While our framework encapsulates many natural structural properties, there are problems that are fine-grained reducible to each other and either do not have an obvious correlation between their structures, or have connections that are not apparent. We will show here that our definition of Parameterized Fine-Grained Reductions is consistent with those cases, as the set of problems that are reducible to each other via fine-grained reductions and the respective set for parameterized fine-grained reductions are equivalent.
Firstly, given problems $A$ and $B$ with $a(n), b(n)$ their respective conjectured best running times, if $(A, \bar{\kappa}) \leq_{a,b} (B, \bar{\lambda})$, we can simply ignore the parameters involved in the reduction and treat it as a fine-grained reduction between $A$ and $B$, as the time restrictions enforced in both definitions are identical (the $a(n)^{1-\delta}$ bound for the reduction time, and $\sum_{j} b(n_j)^{1-\epsilon} \leq a(n)^{1-\delta}$ for the calls to problem $B$).
Given problems $A$ and $B$ with $a(n), b(n)$ their respective conjectured best running times, if $A \leq_{a,b} B$, then for every $\lambda_i$ in a given parameterization $\bar{\lambda}$ of $B$ and for each query call $y_j$ made in the reduction, there exists a computable function $g_i^j$ such that $\lambda_i(y_j) = g_i^j(\bar{\kappa}(x))$, and as such $(A, \bar{\kappa}) \leq_{a,b} (B, \bar{\lambda})$.
We remind the reader here that, for a value to be considered a parameter of a problem $B$, it has to be the output of a computable function on the input of $B$. Hence, every parameter of $B$ has a computable function associated with it. Now, since the reduction producing the instances of $B$ is $a(n)^{1-\delta}$-time computable, it can trivially be viewed as a computable function having as domain the inputs to problem $A$, and as range the input instances of $B$ it produces. Having these, we can simply take the composition of the reduction and each $\lambda_i$ to produce computable functions that output the aforementioned parameters of $B$. Hence, these parameters can be viewed both as parameters of $B$ and as parameters of $A$. For these reasons, the fine-grained reduction can be viewed as a PFGR (ergo having the identity function as each $g_i^j$). ∎
4 Fixed Parameter Improvable Problems (FPI)
In this section we define a class of problems that admit parameterized improvements on their conjectured best running times, prove that this class is closed under PFGR, as well as produce new parameterized improvements as an application of this closure.
[FPI] Let $A$ be a problem with conjectured best running time $t(n)$. Then, $A$ has the FPI property with respect to a set of parameters $\bar{\kappa}$ (in this context, $\bar{\kappa}$ denotes a set of parameterization functions over the input of $A$), denoted FPI$(A, \bar{\kappa})$, if there exists an algorithm solving $A$ in $f(\bar{k}) \cdot t(n)^{1-\epsilon}$ time, for some $\epsilon > 0$ and a computable function $f$.
For simplicity, in the case of a single parameter $k$, we denote the property as FPI$(A, k)$.
For every NP-hard problem $A$ that admits an FPT algorithm w.r.t. a parameter $k$, we have that FPI$(A, k)$ holds, unless ETH fails.
Since all NP-hard problems are conjectured to demand exponential running time, any FPT algorithm that solves them in $f(k) \cdot n^{O(1)}$ time for some parameter $k$ can be viewed as an improvement on the conjectured best running time $t(n)$; the actual improvement in the conjectured running time is in fact much greater than the required $t(n)^{1-\epsilon}$. ∎
The following problems, parameterized by the respective parameter, are FPI:
Vertex Cover, Solution size
SAT, Number of clauses
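To illustrate the first item, the classic bounded search tree gives an FPT algorithm for Vertex Cover parameterized by solution size; a minimal sketch:

```python
def vertex_cover(edges, k):
    """Bounded search tree for Vertex Cover parameterized by solution
    size k, running in O(2^k * m) time: pick any uncovered edge (u, v);
    any cover must contain u or v, so branch on both choices.
    Returns a cover of size <= k, or None if none exists."""
    if not edges:
        return set()            # every edge covered
    if k == 0:
        return None             # edges remain but budget exhausted
    u, v = edges[0]
    for pick in (u, v):
        sub = vertex_cover([e for e in edges if pick not in e], k - 1)
        if sub is not None:
            return sub | {pick}
    return None
```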
[Minimum Necessary Set] Let $(A, \bar{\kappa})$, $(B, \bar{\lambda})$ be parameterized problems such that $(A, \bar{\kappa}) \leq_{a,b} (B, \bar{\lambda})$. We define the minimum necessary set $\bar{\kappa}'$ to be the minimum set needed to bound the set of parameters of $B$ with respect to $\bar{g}$, ergo the parameter set of problem $A$ for which every $\lambda_i$ satisfies $\lambda_i \leq h(\bar{\kappa}')$ for some computable function $h$.
[Closure under PFGR] Let $(A, \bar{\kappa})$, $(B, \bar{\lambda})$ be parameterized problems. If $(A, \bar{\kappa}) \leq_{a,b} (B, \bar{\lambda})$ and FPI$(B, \bar{\lambda})$, then FPI$(A, \bar{\kappa}')$, where $\bar{\kappa}'$ is the minimum necessary set of $\bar{\kappa}$ w.r.t. $\bar{g}$.
It suffices to prove that there exists an algorithm for $A$ running in $f'(\bar{k}') \cdot a(n)^{1-\epsilon'}$ time for some $\epsilon' > 0$ and computable $f'$, where $\bar{k}'$ are the parameters of the minimum necessary set.
Since $(A, \bar{\kappa}) \leq_{a,b} (B, \bar{\lambda})$, for all $\epsilon > 0$ there exists a $\delta > 0$ such that $A$ is evaluated by an algorithm $R$ using $B$ as an oracle, and $\sum_{j} b(n_j)^{1-\epsilon} \leq a(n)^{1-\delta}$.
Also, FPI$(B, \bar{\lambda})$, so there is an algorithm computing $B$ in $f(\bar{\lambda}) \cdot b(n)^{1-\epsilon}$ time. We can use this algorithm to resolve the oracle calls in time $\sum_{j} f(\bar{\lambda}(y_j)) \cdot b(n_j)^{1-\epsilon}$.
We can use the bound $\lambda_i \leq h(\bar{\kappa}')$ to describe the running time for the calls to $B$ utilizing the function $h$. As such, the total running time of $R$ is at most $a(n)^{1-\delta} + f'(\bar{k}') \cdot a(n)^{1-\delta}$ for a computable function $f'$.
Hence, FPI$(A, \bar{\kappa}')$. ∎
The above result essentially means that any parametric improvement can be carried through a valid PFGR.
4.1 A subquadratic fixed-parameter algorithm for OV
We will now provide an analysis of a known reduction from OV to $3/2$-approx-diameter using the PFGR framework. Specifically, we show that since $3/2$-approx-diameter admits fixed-parameter improvements on the treewidth parameter (hence is in FPI), this can be used to provide a fixed-parameter improvement on the OV problem.
OV is PFG-reducible to $3/2$-approx-diameter.
Note that this reduction is implemented using only one call, and we analyze only one parameter. As such, we will simplify the notation of $g_i^j$ to $g$.
We begin with the construction given in the literature: Given an OV instance with sets $U, V \subseteq \{0,1\}^d$ as input, we create a graph $G$ as follows: for every $u \in U$ create a node $u$, for every $v \in V$ create a node $v$, and for every coordinate $i \in [d]$ create a node $c_i$, as well as two nodes $x, y$. For every $u \in U$ and $i \in [d]$, if $u_i = 1$ we add the edge $(u, c_i)$. Similarly, for every $v \in V$ and $i \in [d]$, if $v_i = 1$, we add the edge $(v, c_i)$. Also, we add the edges $(x, u)$ for every $u \in U$, $(y, v)$ for every $v \in V$, $(x, c_i)$ for every $i \in [d]$, $(y, c_i)$ for every $i \in [d]$, and $(x, y)$.
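A runnable sketch of this construction, together with a naive all-pairs-BFS diameter routine for illustration (node labels are our own): the diameter is 3 exactly when the OV instance contains an orthogonal pair, and 2 otherwise.

```python
from collections import deque

def ov_to_diameter_graph(U, V):
    """Build the graph from the construction above: one node per vector
    in U and V, one node c_i per coordinate, plus the two hubs x, y."""
    d = len(U[0])
    adj = {('u', i): set() for i in range(len(U))}
    adj.update({('v', j): set() for j in range(len(V))})
    adj.update({('c', i): set() for i in range(d)})
    adj['x'], adj['y'] = set(), set()

    def link(a, b):
        adj[a].add(b); adj[b].add(a)

    for i, u in enumerate(U):
        link('x', ('u', i))
        for c in range(d):
            if u[c] == 1:
                link(('u', i), ('c', c))
    for j, v in enumerate(V):
        link('y', ('v', j))
        for c in range(d):
            if v[c] == 1:
                link(('v', j), ('c', c))
    for c in range(d):
        link('x', ('c', c)); link('y', ('c', c))
    link('x', 'y')
    return adj

def diameter(adj):
    """Exact diameter by BFS from every node (for illustration only)."""
    def ecc(s):
        dist = {s: 0}; q = deque([s])
        while q:
            a = q.popleft()
            for b in adj[a]:
                if b not in dist:
                    dist[b] = dist[a] + 1; q.append(b)
        return max(dist.values())
    return max(ecc(s) for s in adj)
```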
It suffices to find which parameter is connected to treewidth via the reduction: As seen in Figure 1-(a), the graph produced by the reduction has a very specific structure. That is, all nodes of group $U$ are linked exclusively with nodes of group $C$ (the coordinate nodes) and with node $x$. Similarly, for group $V$ we have connections to group $C$ and node $y$. Therefore, to produce a tree decomposition of $G$ we can leave the nodes of group $U$ unrelated to those of group $V$.
Now, the specific connections of nodes from the groups $U$, $V$ to the nodes of group $C$ can vary, depending on the form of the OV instance. We can, however, give an upper bound on the treewidth of $G$, as shown in Figure 1-(b), by copying the whole group $C$ into all of the decomposition's bags. One can check that each component induced by a label is connected, and that all edges of $G$ are covered by the given bags. This decomposition has maximum bag size $d + O(1)$ and hence width $O(d)$, where $d$ is the size of group $C$.
As follows from the definition, this is an upper bound for the treewidth of $G$. However, another tree decomposition with smaller width could exist. In order to prove that the treewidth of $G$ is exactly $\Theta(d)$, we must show that there exist instances of OV that produce, through this reduction, graphs of treewidth $\Omega(d)$. Depending on the vectors containing a 1-coordinate in the suitable positions, we could end up with a graph containing the complete bipartite graph $K_{d,d}$ as a minor. Since $K_{d,d}$ has treewidth exactly $d$, we can deduce that graphs produced by this reduction have treewidth $\Omega(d)$ in the worst case.
Since $d$ is exactly the dimension of the OV instance producing the graph, we can see that there exists a function $g$ that maps the dimension of the OV instance to the treewidth of the $3/2$-approx-diameter instance.
Therefore, the mapping is $tw(G) = g(d) = \Theta(d)$, and OV is PFG-reducible to $3/2$-approx-diameter. ∎
It was shown in the literature that $3/2$-approx-diameter parameterized by treewidth has a subquadratic algorithm. Hence, FPI($3/2$-approx-diameter, treewidth). Now, as follows from Theorem 4, we should expect that FPI(OV, $d$), where $d$ is the dimension of the vectors. Equivalently, we would expect a parametric improvement to the running time of OV for the instances that are related to instances of the $3/2$-approx-diameter problem of bounded treewidth.
By Theorem 4.1 we have that, since $3/2$-approx-diameter has a parameterized improvement for fixed treewidth, OV has a subquadratic algorithm for fixed dimension of the vectors.
We will construct a subquadratic fixed-parameter algorithm, via the process described above.
Specifically, for the reduction time: the graph constructed contains $O(n + d)$ nodes and $O(nd)$ edges, and can be constructed in $O(nd)$ time from the OV instance.
The resulting graph has $O(n + d)$ nodes, and the decision problem of $3/2$-approx-diameter for this graph also gives an answer for the decision problem of the OV instance.
The parameterized complexity of the algorithm solving diameter is $2^{O(tw \log tw)} \cdot n^{1+o(1)}$, where $tw$ denotes the treewidth of the graph $G$.
Ergo, since $tw(G) = \Theta(d)$ (via our reduction), we can use the above to obtain an algorithm for OV running in time: $2^{O(d \log d)} \cdot n^{1+o(1)}$.
Through our reduction, this result can be carried over to all problems PFG-reducible to OV, such as SAT or Hitting Set. See the full version for the respective analyses of these reductions.
As we have seen, if problem $B$ admits parameterized improvements on parameters $\bar{\lambda}$, then through the reduction this can be translated to improvements on problem $A$ and parameters $\bar{\kappa}$ such that $\lambda_i = g_i(\bar{\kappa})$, for each $i$. However, whether we can locate which parameters constitute $\bar{\kappa}$ or not depends on the invertibility of $\bar{g}$. In case $\bar{g}$ is not invertible, one can only show the existence of such an algorithm, but not necessarily construct it. Nevertheless, the FPI property still holds through our definition, because we can abuse the notation to interpret each $\lambda_i$ as a parameter of $A$, as it is a byproduct of the reduction, which is an ($a(n)^{1-\delta}$-time) computable function on the input of $A$.
5 Circuit Characterization of FPI
We provide a characterization of FPI using circuit complexity. Specifically, it is known that any circuit of size $s$ can be simulated by an algorithm with complexity $O(s)$; thus, if one can design a circuit with size smaller than the conjectured complexity of the problem, then this can be translated into a faster algorithm.
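The linear-time simulation mentioned above is just a single pass over the gates in topological order; a minimal sketch (the gate encoding is our own assumption):

```python
def eval_circuit(gates, inputs):
    """Evaluate a Boolean circuit in time linear in its size: one pass
    over the gates, which are listed in topological order. Each gate is
    a tuple (op, a, b) with op in {'IN', 'NOT', 'AND', 'OR'}; a and b
    are indices of earlier gates (for 'IN', a is an input index).
    Returns the value of the last gate."""
    val = []
    for op, a, b in gates:
        if op == 'IN':
            val.append(inputs[a])
        elif op == 'NOT':
            val.append(not val[a])
        elif op == 'AND':
            val.append(val[a] and val[b])
        else:  # 'OR'
            val.append(val[a] or val[b])
    return val[-1]
```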
As such, having a circuit of size $t(n)$, if we can fix any number of parameters such that the circuit can be seen as having $f(\bar{k}) \cdot t(n)^{1-\epsilon}$ size, we can use this circuit to produce a truly sub-$t(n)$ algorithm for our problem.
Nevertheless, the smallest circuit solving the problem may differ from the one produced via a simulation of an algorithm (which is the only universal way to produce a circuit from an arbitrary algorithm). This means that an improvement in the size complexity of the circuit may not be enough to be translated into a more effective algorithm via an inverse simulation. In that case, for the improvement in the size of the circuit to be translated to a faster algorithm, it is necessary to exceed this difference. From now on, when referring to a circuit solving a problem, the reader should consider the one produced by the simulation procedure.
As shown in the literature, we can simulate any algorithm running in time $t(n)$ by a circuit of size $O(t(n) \log t(n))$. Thus, we can use this as an upper bound on the overall size complexity of the circuit produced, to show that an improvement of the circuit size to $f(\bar{k}) \cdot t(n)^{1-\epsilon}$ (for some $\epsilon > 0$) is always sufficient.
Let $A$ be a problem with conjectured best running time $t(n)$. Then, FPI$(A, \bar{\kappa})$ holds if and only if the uniform circuit family computing $A$ has, for each input length $n$, size $f(\bar{k}) \cdot t(n)^{1-\epsilon}$, for some $\epsilon > 0$ and a computable function $f$.
($\Leftarrow$) If we choose $\epsilon' < \epsilon$, then the circuit of size $f(\bar{k}) \cdot t(n)^{1-\epsilon}$ can be simulated by an algorithm running in $f(\bar{k}) \cdot t(n)^{1-\epsilon'}$ time for sufficiently large $n$, which is an FPI improvement on the running time of the algorithm, since we can simulate a circuit of that size in linear time.
($\Rightarrow$) If there is an algorithm and a parameter set for which the running time is $f(\bar{k}) \cdot t(n)^{1-\epsilon}$, then we can simulate it with a circuit of size: $O\big(f(\bar{k}) \cdot t(n)^{1-\epsilon} \cdot (\log f(\bar{k}) + \log t(n))\big)$.
In the scope of parameterized complexity, we can transform the addition into a multiplication, since the two are equivalent, as seen in the parameterized complexity literature.
If we choose $\epsilon' < \epsilon$, then this size is $f'(\bar{k}) \cdot t(n)^{1-\epsilon'}$, for a computable function $f'$. ∎
In this work we have introduced a framework for fine-grained reductions that can capture a deeper connection between the problems involved, namely, a correlation among their parameters. We have shown that this framework captures the essence of the fine-grained approach without restricting the results. As a byproduct of our analysis, we defined and studied the structure of improvable problems, and the implications of fine-grained reductions on such problems. Finally, we produced a fixed-parameter improvement in the running time of the OV problem by utilizing its parametric correlation to the $3/2$-approx-diameter problem.
A notable discussion in this field is whether or not this framework can be used to define a complexity class, since FPI as a property has some unusual features. Specifically, the inherent meaning of "hardness" that arises results in the absence of maximal elements (at least currently) in the partial ordering defined by parameterized fine-grained reductions. Additionally, because of the conjectural nature of our notion of improvements, the property is directly related to previous work on each problem. It is possible that a parameterized algorithm may be proven sub-optimal in case a problem's conjectured best running time is updated, resulting in disproving said property. As such, if problems having this property are considered a class, inclusion in this class could be negated after the fact, which is inconsistent with traditional complexity classes.
Using this framework, one can follow the direction of Theorem 4.1 to produce parameterized improvements via the transitivity of PFGR. This analysis can be done for each reduction in the fine-grained reduction web, producing a wide variety of improved algorithms on many interesting problems.
A natural question to consider is the relation between our work and the traditional parameterized approach. As seen in Theorem 4, it remains an open problem to find the exact relation between FPI and FPT, that is, to formally characterize the problems in FPT that are not FPI. Additionally, one could potentially utilize the plethora of results available through the parameterized framework on fixed-parameter tractable or harder problems. All of these results may be translated to our terminology given the appropriate assumptions.
-  Amir Abboud and Karl Bringmann. Tighter connections between formula-sat and shaving logs. In 45th International Colloquium on Automata, Languages, and Programming, ICALP 2018, July 9-13, 2018, Prague, Czech Republic, pages 8:1–8:18, 2018. URL: https://doi.org/10.4230/LIPIcs.ICALP.2018.8, doi:10.4230/LIPIcs.ICALP.2018.8.
-  Amir Abboud, Karl Bringmann, Holger Dell, and Jesper Nederlof. More consequences of falsifying SETH and the orthogonal vectors conjecture. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018, Los Angeles, CA, USA, June 25-29, 2018, pages 253–266, 2018. URL: https://doi.org/10.1145/3188745.3188938, doi:10.1145/3188745.3188938.
-  Amir Abboud, Fabrizio Grandoni, and Virginia Vassilevska Williams. Subcubic equivalences between graph centrality problems, apsp and diameter. In Proceedings of the Twenty-sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’15, pages 1681–1697, Philadelphia, PA, USA, 2015. Society for Industrial and Applied Mathematics. URL: http://dl.acm.org/citation.cfm?id=2722129.2722241.
-  Amir Abboud, Thomas Dueholm Hansen, Virginia Vassilevska Williams, and Ryan Williams. Simulating branching programs with edit distance and friends: or: a polylog shaved is a lower bound made. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 375–388, 2016. URL: https://doi.org/10.1145/2897518.2897653, doi:10.1145/2897518.2897653.
-  Amir Abboud, Virginia Vassilevska Williams, and Joshua R. Wang. Approximation and fixed parameter subquadratic algorithms for radius and diameter. CoRR, abs/1506.01799, 2015. URL: http://arxiv.org/abs/1506.01799, arXiv:1506.01799.
-  Amir Abboud, Virginia Vassilevska Williams, and Huacheng Yu. Matching triangles and basing hardness on an extremely popular conjecture. SIAM J. Comput., 47(3):1098–1122, 2018. URL: https://doi.org/10.1137/15M1050987, doi:10.1137/15M1050987.
-  Sanjeev Arora and Boaz Barak. Computational Complexity: A Modern Approach. Cambridge University Press, New York, NY, USA, 1st edition, 2009.
-  Karl Bringmann and Marvin Künnemann. Multivariate fine-grained complexity of longest common subsequence. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7-10, 2018, pages 1216–1235, 2018. URL: https://doi.org/10.1137/1.9781611975031.79, doi:10.1137/1.9781611975031.79.
-  Marco L. Carmosino, Jiawei Gao, Russell Impagliazzo, Ivan Mihajlin, Ramamohan Paturi, and Stefan Schneider. Nondeterministic extensions of the strong exponential time hypothesis and consequences for non-reducibility. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, Cambridge, MA, USA, January 14-16, 2016, pages 261–270, 2016. URL: https://doi.org/10.1145/2840728.2840746, doi:10.1145/2840728.2840746.
-  Marco L. Carmosino, Russell Impagliazzo, and Manuel Sabin. Fine-grained derandomization: From problem-centric to resource-centric complexity. In 45th International Colloquium on Automata, Languages, and Programming, ICALP 2018, July 9-13, 2018, Prague, Czech Republic, pages 27:1–27:16, 2018. URL: https://doi.org/10.4230/LIPIcs.ICALP.2018.27, doi:10.4230/LIPIcs.ICALP.2018.27.
-  Lijie Chen and Ryan Williams. An equivalence class for orthogonal vectors. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 21–40, 2019. URL: https://doi.org/10.1137/1.9781611975482.2, doi:10.1137/1.9781611975482.2.
-  Rodney G. Downey and Michael R. Fellows. Fundamentals of Parameterized Complexity. Springer Publishing Company, Incorporated, 2013.
-  Rodney G. Downey and Dimitrios M. Thilikos. Confronting intractability via parameters. Computer Science Review, 5(4):279–317, 2011. URL: https://doi.org/10.1016/j.cosrev.2011.09.002, doi:10.1016/j.cosrev.2011.09.002.
-  Martin Fürer. The tight deterministic time hierarchy. In Proceedings of the 14th Annual ACM Symposium on Theory of Computing, May 5-7, 1982, San Francisco, California, USA, pages 8–16, 1982. URL: https://doi.org/10.1145/800070.802172, doi:10.1145/800070.802172.
-  Anka Gajentaan and Mark H. Overmars. On a class of O(n^2) problems in computational geometry. Comput. Geom., 45(4):140–152, 2012. URL: https://doi.org/10.1016/j.comgeo.2011.11.006, doi:10.1016/j.comgeo.2011.11.006.
-  Jiawei Gao and Russell Impagliazzo. The fine-grained complexity of strengthenings of first-order logic. Electronic Colloquium on Computational Complexity (ECCC), 26:9, 2019. URL: https://eccc.weizmann.ac.il/report/2019/009.
-  Jiawei Gao, Russell Impagliazzo, Antonina Kolokolova, and R. Ryan Williams. Completeness for first-order properties on sparse structures with algorithmic applications. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira, January 16-19, pages 2162–2181, 2017. URL: https://doi.org/10.1137/1.9781611974782.141, doi:10.1137/1.9781611974782.141.
-  Archontia C. Giannopoulou, George B. Mertzios, and Rolf Niedermeier. Polynomial fixed-parameter algorithms: A case study for longest path on interval graphs. CoRR, abs/1506.01652, 2015. URL: http://arxiv.org/abs/1506.01652, arXiv:1506.01652.
-  Hamid Jahanjou, Eric Miles, and Emanuele Viola. Local reductions. In Automata, Languages, and Programming - 42nd International Colloquium, ICALP 2015, Kyoto, Japan, July 6-10, 2015, Proceedings, Part I, pages 749–760, 2015. URL: https://doi.org/10.1007/978-3-662-47672-7_61, doi:10.1007/978-3-662-47672-7_61.
-  Daniel M. Kane and Richard Ryan Williams. The orthogonal vectors conjecture for branching programs and formulas. In 10th Innovations in Theoretical Computer Science Conference, ITCS 2019, January 10-12, 2019, San Diego, California, USA, pages 48:1–48:15, 2019. URL: https://doi.org/10.4230/LIPIcs.ITCS.2019.48, doi:10.4230/LIPIcs.ITCS.2019.48.
-  Liam Roditty and Virginia Vassilevska Williams. Fast approximation algorithms for the diameter and radius of sparse graphs. In Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, STOC ’13, pages 515–524, New York, NY, USA, 2013. ACM. URL: http://doi.acm.org/10.1145/2488608.2488673, doi:10.1145/2488608.2488673.
-  Ryan Williams. A new algorithm for optimal 2-constraint satisfaction and its implications. Theor. Comput. Sci., 348(2-3):357–365, 2005. URL: https://doi.org/10.1016/j.tcs.2005.09.023, doi:10.1016/j.tcs.2005.09.023.
-  Virginia Vassilevska Williams. Hardness of easy problems: Basing hardness on popular conjectures such as the strong exponential time hypothesis (invited talk). In 10th International Symposium on Parameterized and Exact Computation, IPEC 2015, September 16-18, 2015, Patras, Greece, pages 17–29, 2015. URL: https://doi.org/10.4230/LIPIcs.IPEC.2015.17, doi:10.4230/LIPIcs.IPEC.2015.17.
-  Virginia Vassilevska Williams and R. Ryan Williams. Subcubic equivalences between path, matrix, and triangle problems. J. ACM, 65(5):27:1–27:38, 2018. URL: https://doi.org/10.1145/3186893, doi:10.1145/3186893.