1 Introduction
Fine-Grained Complexity deals with the exact complexity of problems and establishes a web of refined reductions that preserve exact solving times. While many of the key ideas come from well-known frameworks (the NP-completeness program, parameterized algorithms and complexity, etc.), this significant new perspective has emerged only recently [23].
The main question posed by this new field is “given a problem known to be solvable in time $T(n)$, is there an $\varepsilon > 0$ such that it can be solved in time $O(T(n)^{1-\varepsilon})$?”. If such a result exists, we can connect this improvement to algorithmic advances among different problems. If it does not, we can establish conditional lower bounds based on this hardness, as is usually done with popular conjectures [6, 24]. Several such conjectures are used, such as the Orthogonal Vectors Conjecture (OVC), the Strong Exponential Time Hypothesis (SETH), the APSP conjecture, etc. It has been shown that OVC is implied by SETH [22], and the variants and consequences of both conjectures have been extensively studied.
A main difference between the fine-grained approach and classical complexity theory is that all NP-complete problems form a single equivalence class modulo polynomial-time reductions, whereas fine-grained reductions produce a much more complex web. Many problems stem from SETH/OVC, others from the 3SUM conjecture (especially computational geometry problems [15]), and very few equivalence classes are known (a significant exception is the equivalence class for APSP [3, 24]). These observations raise questions concerning the structural complexity of fine-grained reducibility, as has traditionally been the case in other fields of complexity theory: conditional irreducibility results, the morphology of the equivalence classes formed, fine-grained completeness notions, consequences of partitioning problems into classes, etc., propose a fine-grained structural complexity program.
On the other hand, the parameterized point of view has been dominant in theoretical computer science during the last decades. ETH and SETH were introduced in that context, and are widely used to establish conditional lower bounds (SETH-hardness). Additionally, Fixed Parameter Tractability (a problem is called fixed parameter tractable if there is a parameterization $\kappa$ such that the problem can be solved in time $f(\kappa) \cdot n^{O(1)}$, for a computable function $f$) gave a multivariate view of complexity theory, as well as the means to concentrate the hardness of problems on a certain parameter instead of the input size: many problems admit significant improvements on their complexity if one restricts them to instances having fixed $k$, where $k$ is a parameter of the problem in question. This can be viewed as an indication of the structural importance of $k$ in each problem.
Similar techniques can be used to differentiate versions of problems, as has been seen recently in the case of fine-grained conjectures (e.g. problems on sparse graphs [17]).
1.1 Motivation
The conditional bounds shown by fine-grained reductions stem from relating improvements between the conjectured best running times of two problems. This has resulted in an effort to classify problems either through equivalence or through hardness via a minimal element (OV-hardness) [11]. Additionally, most known fine-grained reductions inherently relate the problems in more than trivial ways, also mapping specific parameters of each problem to one another. This could indicate relations between the problems’ property of concentrating hardness on certain parameters.
On the other hand, while parameterized complexity has traditionally been concerned with bridging the gap between polynomial and exponential running times, there has recently been interest in fixed-parameter improvements amongst polynomial-time problems (sometimes referred to as FPT in P [23, 18]).
The ability of fine-grained complexity to express correlations between problems of various running times comes with some inherent theoretical obstacles. Specifically, the new viewpoint of a problem’s "hardness" is associated with the capability to improve its running time. This results in a "counterintuitive" notion of hard problems, as they frequently correspond to (in classical terms) easy ones. Moreover, the foundation on which this framework is based allows the computational resources of a fine-grained reduction to change depending on the problems involved. This produces vagueness regarding what would be considered a complexity class compatible with such reductions.
Our concern is to overcome the inherent difficulties of the field towards constructive methods that produce generalizable results, as well as to contribute to the effort of establishing structural foundations for the framework. This could further our understanding of what constitutes difficulty in computation, as well as structurally define improvability.
1.2 Our Results
We introduce Parameterized Fine-Grained Reductions (PFGR), a parameterized approach to fine-grained reducibility that is consistent with known fine-grained reductions and offers (a) tools to study structural correlations between problems, and (b) an extension to the suite of results that are obtained through the reductions. This provides a multivariate approach to analyzing fine-grained reductions. Additionally, we give evidence that these reductions connect structural properties of the problems, such as the aforementioned concentration of hardness.
We define a class of problems (FPI) that admit parameterized improvements on their respective conjectured best running times. To this end, we treat improvements in the same way as fine-grained complexity (i.e. excluding polylogarithmic improvements in the running time). This gives us the expressive power to correlate structural properties of problems that belong to different complexity classes.
We prove that this class is closed under the aforementioned parameterized fine-grained reductions, which can be used as a tool to produce non-trivial parameterized algorithms (via the reduction process). We present such an application in the case of the reduction from OV to Diameter, in which we use a fixed-parameter (with respect to treewidth) algorithm for Diameter to produce a new fixed-parameter algorithm for OV that is subquadratic for fixed dimension $d$ of the input vectors.
Finally, we use notions from parameterized circuit complexity to analyze membership in this class and introduce a circuit characterization, similar to the one used in the definition of the W-hierarchy in parameterized complexity [13].
1.3 Related Work
The fine-grained reductions literature has grown quickly over the recent years (see [23] for a survey). The basis for reductions has been a set of conjectures that are widely considered to be true, namely the 3SUM, APSP, Hitting Set and SETH conjectures, which are associated with the respective plausibility of improvement for each problem.
A large portion of known reductions stem from the Orthogonal Vectors problem, which is known to be SETH-hard [22]; thus OV plays a central role in the structure of the reductions web. Different versions of the OV conjecture have been studied, usually parameterized by the dimension [2].
The logical expressibility of problems similar to OV was studied in [17], research that created a notion of hardness for problems expressible in first-order logic and introduced various equivalence classes ([17, 16, 11]) concerning different versions of the OV problem, with significant applications to other fields (such as computational geometry). Additionally, the OV conjecture was studied in restricted models, such as branching programs [20].
Structural implications of fine-grained irreducibility and hypotheses were studied, culminating in new conjectures, like NSETH [9]. New implications of refuting the aforementioned hypotheses on circuit lower bounds were discovered [19, 4, 1], and the refutation of SETH would imply state-of-the-art lower bounds on non-uniform circuits. Recently, fine-grained hypotheses were connected to long-standing questions in complexity theory, such as derandomization of complexity classes [10].
The parameterized analysis of algorithms, one of the most active areas in theoretical and applied computer science, has frequently been used to provide tools in fine-grained complexity. As such, new conjectures were formed about the solvability of polynomial-time problems in terms of parameters [5].
In a notable case of similar work [8], the authors analyze the multivariate (parameterized) complexity of the Longest Common Subsequence problem (LCS), taking into account all of the commonly discussed parameters of the problem. As a result, they produce a general conditional lower bound accounting for the conjunction of different parameterized algorithms for LCS: unless SETH fails, the optimal running time for LCS is given (up to lower-order factors) by a fixed formula in the aforementioned parameters. Note that our work is in a different direction to this result. Instead of separately reducing SETH to each different parameterized case, we give the means to show correlations between parameters in such reductions, i.e. with this framework one can analyze a single reduction to show multiple dependencies between parameterized improvements for each problem. While this automatically produces several conditional bounds among parameterizations of problems, it proves most useful in the opposite direction, namely to transfer improvements between problems and thus derive new parameterized algorithms.
2 Preliminaries
We denote by $[n]$, for $n \in \mathbb{N}$, the set $\{1, \dots, n\}$.
[Generalized Parameterized Languages] Let $L \subseteq \Sigma^*$ be a language and $\kappa_1, \dots, \kappa_t : \Sigma^* \to \mathbb{N}$ parameterization functions, $t \in \mathbb{N}$. Let $(L, \kappa_1, \dots, \kappa_t)$ denote the corresponding parameterized language. For simplicity, we will use $(L, \bar{\kappa})$ to abbreviate $(L, \kappa_1, \dots, \kappa_t)$, and $(x, \bar{\kappa}(x))$ to denote an input instance for $L$.
Note here the divergence from the classical definition, which associates each problem with only one parameter [13]. We prefer the generalized version that allows us to describe simultaneously several structural measures of the problem, such as the number of variables, the number of nodes, and more complex ones. For each one of those parameters we assume that there exists an index $i$ such that $\kappa_i$ corresponds to the mapping of the input instance to this specific parameter. In this way we can not only isolate and analyze different characteristics of structures, but also treat each of these measures individually.
[OV] Define the Orthogonal Vectors problem (OV) as follows: given two sets $U, V \subseteq \{0,1\}^d$, with $|U| = |V| = n$, are there two vectors $u \in U$, $v \in V$ such that $u \cdot v = 0$?
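As a concrete reference point, the conjecturally optimal naive algorithm for OV is simply a quadratic scan over all pairs of vectors; a minimal sketch (function name is our own):

```python
from itertools import product

def has_orthogonal_pair(U, V):
    """Naive O(n^2 * d) check: is there u in U, v in V with u . v = 0?"""
    return any(
        all(a * b == 0 for a, b in zip(u, v))
        for u, v in product(U, V)
    )

U = [(1, 0, 1), (0, 1, 1)]
V = [(1, 1, 0), (0, 1, 0)]
print(has_orthogonal_pair(U, V))  # (1,0,1).(0,1,0) = 0, so True
```

The OV conjecture asserts that, for dimension $d = \omega(\log n)$, no algorithm beats this quadratic behavior by a polynomial factor.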
[3/2-Approx-Diameter] Given a graph $G = (V, E)$, approximate its diameter, i.e. the quantity $\max_{u,v \in V} d(u, v)$, within a factor of $3/2$.
We will also define the notion of treewidth as we will later present a result that utilizes it as a graph parameter.
[Treewidth] A tree decomposition of a graph $G = (V, E)$ is a tree $T$ with nodes $X_1, \dots, X_m$ (called bags), where each $X_i$ is a subset of $V$, satisfying the following properties:

1. The union of all bags $X_i$ equals $V$. That is, each graph vertex is contained in at least one bag.

2. The bags containing a vertex $v$ form a connected subtree of $T$.

3. For every edge $(u, v)$ in the graph, there is a bag $X_i$ that contains both $u$ and $v$.

The width of a tree decomposition is the size of its largest bag minus one. The treewidth of a graph $G$ is the minimum width among all possible tree decompositions of $G$.
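Since the three properties above are mechanically checkable, the following sketch (helper names are our own) verifies a candidate tree decomposition and computes its width:

```python
def is_tree_decomposition(vertices, edges, bags, tree_edges):
    """Check the three tree-decomposition properties from the definition."""
    # 1. Every vertex appears in some bag.
    if not all(any(v in bag for bag in bags.values()) for v in vertices):
        return False
    # 2. For each vertex, the bags containing it form a connected subtree.
    for v in vertices:
        holding = {i for i, bag in bags.items() if v in bag}
        if holding:
            seen, frontier = set(), [next(iter(holding))]
            while frontier:  # BFS over tree edges restricted to `holding`
                i = frontier.pop()
                seen.add(i)
                frontier += [j for a, b in tree_edges
                             for j in ((b,) if a == i else (a,) if b == i else ())
                             if j in holding and j not in seen]
            if seen != holding:
                return False
    # 3. Every graph edge is contained in some bag.
    return all(any(u in bag and w in bag for bag in bags.values())
               for u, w in edges)

def width(bags):
    return max(len(b) for b in bags.values()) - 1

# A 4-cycle a-b-c-d-a has treewidth 2; two overlapping bags witness width 2:
bags = {0: {"a", "b", "c"}, 1: {"a", "c", "d"}}
ok = is_tree_decomposition({"a", "b", "c", "d"},
                           [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")],
                           bags, [(0, 1)])
print(ok, width(bags))  # True 2
```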
We will utilize the following notions from circuit complexity theory (for more details the reader is referred to Ch. 6 of [7]).
[Circuit Complexity]
The circuit-size complexity of a Boolean function $f$ is the minimal size (number of gates) of any circuit computing $f$. The circuit-depth complexity of a Boolean function $f$ is the minimal depth of any circuit computing $f$.
We will also use the following definition of fine-grained reductions from [24]. [Fine-Grained Reduction] Let $a, b$ be nondecreasing functions of $n$. Problem $A$ is $(a, b)$-reducible to problem $B$ (denoted $A \leq_{a,b} B$) if for all $\varepsilon > 0$ there exists a $\delta > 0$ and an algorithm $R$ solving $A$ with oracle access to $B$, such that $R$ runs in at most $a(n)^{1-\delta}$ time on inputs of length $n$, making at most $q(n)$ oracle queries adaptively (i.e. the $i$-th query instance is a function of the input and the previous oracle answers). The sizes $n_1, \dots, n_{q(n)}$ of the queries, for any choice of oracle answers, obey the inequality: $\sum_{i=1}^{q(n)} b(n_i)^{1-\varepsilon} \leq a(n)^{1-\delta}$.
3 Parameterized Fine-Grained Reductions
In this section we define Parameterized Fine-Grained Reductions (PFGR), along with some examples of applications, and show their relation to fine-grained reductions.
[PFGR] Given problems $A$ and $B$ with $a(n)$, $b(n)$ their respective conjectured best running times: we say $(A, \bar{\kappa}) \leq_{a,b} (B, \bar{\lambda})$ if there exists an algorithm $R$ such that:

1. For every $\varepsilon > 0$ there exists a $\delta > 0$ such that $R$ runs in time $a(n)^{1-\delta}$ on inputs of length $n$ by making $q(n)$ query calls to $B$ with query lengths $n_1, \dots, n_{q(n)}$, and $\sum_{j=1}^{q(n)} b(n_j)^{1-\varepsilon} \leq c \cdot a(n)^{1-\delta}$, for some constant $c$, and $R$ accepts iff $x \in A$.

2. For every query $x_j$ and every parameter $\lambda_i$ of $B$, there exists a computable function $g_{i,j}$ defined as $g_{i,j} : \mathbb{N}^t \to \mathbb{N}$ such that for every input $x$,
$\lambda_i(x_j) = g_{i,j}(\kappa_1(x), \dots, \kappa_t(x))$.
The number of calls is specific to the type of reduction used. In the case of adaptive queries, the number of potential calls could grow exponentially (in the worst case). A reasonable objection could arise here, since the mappings $g_{i,j}$ are not assumed to have any time restriction. However, the $g_{i,j}$ are not implemented by the reduction algorithm, and are merely correlating functions of the parameters. As such, they do not affect the running time of the reduction.
Additionally, one might suspect that this definition is a limitation on the original fine-grained framework and hence is only satisfied by some of the known reductions. The main problem is that most of the known reductions refer to non-parameterized problems. This, however, can easily be overcome by our formalization, as we can view these as projections of PFGR (see Section 3):
Given a problem and an input instance, the parameterized version of the problem can be produced by extending the input with the computable function that defines each parameter over it. We can now redefine any reduction in which the problem took place, simply replacing the problem with its parameterized version. This essentially provides us with all of the possible parameterizations a problem can have, and uses them as a whole in order to preserve structural characteristics.
While analyzing reductions, some notable cases occur. Firstly, the case of only one query call, as observed in the majority of known fine-grained reductions. Secondly, the case where, even though many query calls are made, the constructions of the input instances maintain uniform mappings of the parameters, i.e. $g_{i,j} = g_{i,j'}$ for all queries $j, j'$. Lastly, the case where the value of each parameter of problem $B$ is related to only a single parameter of $A$, i.e. $\lambda_i(x_j) = g_{i,j}(\kappa_l(x))$ for a single index $l$; this case is especially useful in transferring parameterized improvements, as we will see in Section 4.
We provide some examples to further clarify our definition and notation. Consider the well-studied reduction SAT $\leq$ OV, presented in [22]. It is apparent that, through one call, the number of clauses $m$ of the SAT instance corresponds to the dimension of the OV vectors, and the number of variables $n$ is mapped to the number of vectors via the mapping $n \mapsto 2 \cdot 2^{n/2}$. This means that the input instance to OV will contain $2 \cdot 2^{n/2}$ vectors of dimension $m$. This procedure is summarized in the first row of the following table, alongside other indicative reductions in the same context. For a more detailed analysis of each reduction see the full version.
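The split-and-list construction behind this reduction can be sketched as follows (a simplified rendering with our own helper names): coordinate $c$ of a half-assignment's vector is 1 iff that half leaves clause $c$ unsatisfied, so an orthogonal pair corresponds exactly to a satisfying assignment.

```python
from itertools import product

def sat_to_ov(n_vars, clauses):
    """Sketch of the split-and-list reduction from [22]: a CNF on n variables
    and m clauses maps to an OV instance with 2^(n/2) vectors per side and
    dimension m. An orthogonal pair <=> a satisfying assignment."""
    half = n_vars // 2
    left_vars = list(range(1, half + 1))
    right_vars = list(range(half + 1, n_vars + 1))

    def vectors(var_block):
        vecs = []
        for bits in product([False, True], repeat=len(var_block)):
            assign = dict(zip(var_block, bits))
            # Literal l > 0 means variable l is true; l < 0 means negated.
            vec = tuple(
                0 if any(abs(l) in assign and (assign[abs(l)] == (l > 0))
                         for l in clause) else 1
                for clause in clauses
            )
            vecs.append(vec)
        return vecs

    return vectors(left_vars), vectors(right_vars)

# (x1 or x2) and (not x1 or x2): satisfiable with x2 = True
U, V = sat_to_ov(2, [(1, 2), (-1, 2)])
print(any(all(a * b == 0 for a, b in zip(u, v)) for u in U for v in V))  # True
```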
Consistency with fine-grained complexity
While our framework encapsulates many natural structural properties, there are problems that are fine-grained reducible to each other and either do not have an obvious correlation between their structures, or have connections that are not apparent. We will show here that our definition of Parameterized Fine-Grained Reduction is consistent with those cases, as the set of pairs of problems that are reducible to each other via fine-grained reductions (denoted FGR) and the respective set for Parameterized Fine-Grained Reductions (denoted PFGR) are equivalent.
Let FGR $= \{(A, B) : A \leq_{a,b} B\}$ and PFGR $= \{(A, B) : (A, \bar{\kappa}) \leq_{a,b} (B, \bar{\lambda})\}$.
Then FGR $=$ PFGR.

Firstly, given problems $A$ and $B$ with $a(n)$, $b(n)$ their respective conjectured best running times, if $(A, \bar{\kappa}) \leq_{a,b} (B, \bar{\lambda})$, we can simply ignore the parameters involved in the reduction and treat it as a fine-grained reduction between $A$ and $B$, as the time restrictions enforced in both definitions are identical (an $a(n)^{1-\delta}$ bound for the reduction time, and $\sum_j b(n_j)^{1-\varepsilon} \leq a(n)^{1-\delta}$ for the calls to problem $B$).
Conversely, given problems $A$ and $B$ with $a(n)$, $b(n)$ their respective conjectured best running times, if $A \leq_{a,b} B$, then for every parameter $\lambda_i$ in a given parameterization $\bar{\lambda}$ of $B$ and for each query call $x_j$ made in the reduction, there exists a computable function $g_{i,j}$ such that $\lambda_i(x_j) = g_{i,j}(\kappa_1(x), \dots, \kappa_t(x))$, and as such $(A, \bar{\kappa}) \leq_{a,b} (B, \bar{\lambda})$.
Proof.
We remind the reader here that for $\lambda$ to be considered a parameter of a problem $B$, it has to be the output of a computable function on the input of $B$. Hence, every parameter of $B$ has a computable function associated with it. Now, since the reduction producing the instances of $B$ is computable (in time $a(n)^{1-\delta}$), it can trivially be viewed as a computable function whose domain is the set of inputs to problem $A$, and whose range is the set of input instances of $B$ it produces. Having these, we can simply take the composition of the reduction and each $\lambda_i$ to produce computable functions that output the aforementioned parameters of $B$. Hence, these parameters can be viewed both as parameters of $B$ and as parameters of $A$. For these reasons, the fine-grained reduction can be viewed as $(A, \bar{\kappa}) \leq_{a,b} (B, \bar{\lambda})$ with $\bar{\kappa} = \bar{\lambda} \circ R$ (ergo having the identity function as each $g_{i,j}$). ∎
4 Fixed Parameter Improvable Problems (FPI)
In this section we define a class of problems that admit parameterized improvements on their conjectured best running times, prove that this class is closed under PFGR, as well as produce new parameterized improvements as an application of this closure.
[FPI] Let $A$ be a problem with conjectured best running time $t(n)$. Then $A$ has the FPI property with respect to a set of parameters $P$ (in this context, $P$ denotes a set of parameterization functions over the input of $A$), denoted FPI$(A, P)$, if there exists an algorithm solving $A$ in $f(P) \cdot t(n)^{1-\varepsilon}$ time, for some $\varepsilon > 0$ and a computable function $f$.
For simplicity, in the case of a single parameter $k$, we denote the property as FPI$(A, k)$.
For every NP-hard problem $A$ that admits an FPT algorithm w.r.t. a parameter $k$, we have that FPI$(A, k)$ holds, unless the conjecture that NP-hard problems require exponential time fails.
Proof.
Since all NP-hard problems are conjectured to demand exponential running time, any FPT algorithm that solves them in $f(k) \cdot n^{O(1)}$ time for some parameter $k$ can be viewed as an improvement to the conjectured running time; the actual improvement is in fact much greater than what the FPI property requires. ∎
The following problems, parameterized by the respective parameter, are FPI:

- Vertex Cover, solution size

- SAT, number of clauses

- k-Knapsack, k
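For instance, the FPI membership of Vertex Cover with respect to solution size follows (via the proposition above) from the classic $2^k$ branching algorithm; a minimal sketch:

```python
def vertex_cover_at_most_k(edges, k):
    """Classic 2^k branching: some endpoint of any remaining edge
    must be placed in the cover, so branch on both choices."""
    uncovered = list(edges)
    if not uncovered:
        return True
    if k == 0:
        return False
    u, v = uncovered[0]
    # Branch: put u in the cover, or put v in the cover.
    return (vertex_cover_at_most_k([e for e in uncovered if u not in e], k - 1)
            or vertex_cover_at_most_k([e for e in uncovered if v not in e], k - 1))

# A triangle needs 2 vertices in any cover; 1 is not enough.
triangle = [(1, 2), (2, 3), (1, 3)]
print(vertex_cover_at_most_k(triangle, 1), vertex_cover_at_most_k(triangle, 2))  # False True
```

The recursion tree has at most $2^k$ leaves and each node costs $O(|E|)$, giving $O(2^k \cdot |E|)$ time: an FPT bound that beats any conjectured exponential-in-$n$ running time for fixed $k$.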
[Minimum Necessary Set] Let $(A, \bar{\kappa})$, $(B, \bar{\lambda})$ be parameterized problems such that $(A, \bar{\kappa}) \leq_{a,b} (B, \bar{\lambda})$. We define the minimum necessary set MNS$(\bar{\kappa})$ to be the minimum set of parameters of $A$ needed to bound the set of parameters of $B$ with respect to the mappings $g$; ergo, the minimal parameter set $P \subseteq \bar{\kappa}$ of problem $A$ such that for every parameter $\lambda_i$ of $B$ and every query $x_j$, $\lambda_i(x_j) \leq f(P)$ for some computable function $f$.
[Closure under PFGR] Let $(A, \bar{\kappa})$, $(B, \bar{\lambda})$ be parameterized problems. If $(A, \bar{\kappa}) \leq_{a,b} (B, \bar{\lambda})$ and FPI$(B, \bar{\lambda})$ holds, then FPI$(A, P)$ holds, where $P$ is the minimum necessary set of $\bar{\kappa}$ w.r.t. $\bar{\lambda}$.
Proof.
It suffices to prove that there exists an algorithm for $A$ running in $f(P) \cdot a(n)^{1-\delta}$ time, for some $\delta > 0$, a computable function $f$, and $P$ a set of parameters of $A$.

Since $(A, \bar{\kappa}) \leq_{a,b} (B, \bar{\lambda})$, for all $\varepsilon > 0$ there exists a $\delta > 0$ such that $A$ is evaluated by an algorithm $R$ running in $a(n)^{1-\delta}$ time using $B$ as an oracle, with $\sum_j b(n_j)^{1-\varepsilon} \leq a(n)^{1-\delta}$.

Also, FPI$(B, \bar{\lambda})$ holds, so there is an algorithm computing $B$ in $h(\bar{\lambda}) \cdot b(n)^{1-\varepsilon}$ time for a computable function $h$. We can use this algorithm to resolve the oracle calls, each in time $h(\bar{\lambda}(x_j)) \cdot b(n_j)^{1-\varepsilon}$.

We can use the mappings $g$ to describe the running time for $B$ utilizing the function $h$, since each $\bar{\lambda}(x_j) = g(\bar{\kappa}(x))$ is bounded by a computable function of $P$. As such, the total running time of $R$ is at most $a(n)^{1-\delta} + \sum_j h(g(\bar{\kappa}(x))) \cdot b(n_j)^{1-\varepsilon} \leq (1 + h(g(\bar{\kappa}(x)))) \cdot a(n)^{1-\delta} = f(P) \cdot a(n)^{1-\delta}$.

Hence, FPI$(A, P)$ holds. ∎
The above result essentially means that any parametric improvement can be carried through a valid PFGR.
4.1 A subquadratic fixed-parameter algorithm for OV
We will now provide an analysis of a known reduction from OV to 3/2-approx-Diameter [21] using the PFGR framework. Specifically, we show that since 3/2-approx-Diameter admits fixed-parameter improvements [5] with respect to the treewidth parameter (hence is in FPI), this can be used to provide a fixed-parameter improvement for the OV problem.
OV is PFG-reducible to 3/2-approx-Diameter.
Proof.
Note that this reduction is implemented using only one call, and we analyze only one parameter. As such, we will simplify the notation of $g_{i,j}$ to $g$.
We begin with the construction given in [21]: given an OV instance with sets $U, V \subseteq \{0,1\}^d$, $|U| = |V| = n$, as input, we create a graph $G$ as follows: for every $u \in U$ create a node, for every $v \in V$ create a node, and for every coordinate $i \in [d]$ create a node $c_i$, as well as two additional nodes $s$ and $t$. For every $u \in U$ and coordinate $i$, if $u[i] = 1$ we add the edge $(u, c_i)$. Similarly, for every $v \in V$ and coordinate $i$, if $v[i] = 1$, we add the edge $(v, c_i)$. Also, we add the edges $(s, u)$ for every $u \in U$, $(s, c_i)$ for every $i \in [d]$, $(t, v)$ for every $v \in V$, $(t, c_i)$ for every $i \in [d]$, and $(s, t)$.
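A small sketch of this construction (node-naming conventions are our own) together with a brute-force diameter computation: if the instance contains an orthogonal pair the diameter is 3, otherwise it is 2, which is exactly the gap a 3/2-approximation distinguishes.

```python
from collections import defaultdict, deque

def ov_to_diameter_graph(U, V):
    """Sketch of the construction from [21]: vector nodes ('u', i) / ('v', j),
    coordinate nodes ('c', k), plus hubs 's' and 't'.
    Diameter 2 <=> no orthogonal pair; diameter 3 <=> one exists."""
    d = len(U[0])
    adj = defaultdict(set)

    def add(x, y):
        adj[x].add(y)
        adj[y].add(x)

    for i, u in enumerate(U):
        add(('u', i), 's')
        for k in range(d):
            if u[k] == 1:
                add(('u', i), ('c', k))
    for j, v in enumerate(V):
        add(('v', j), 't')
        for k in range(d):
            if v[k] == 1:
                add(('v', j), ('c', k))
    for k in range(d):
        add(('c', k), 's')
        add(('c', k), 't')
    add('s', 't')
    return adj

def diameter(adj):
    def ecc(src):  # BFS eccentricity
        dist = {src: 0}
        q = deque([src])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        return max(dist.values())
    return max(ecc(x) for x in adj)

U = [(1, 0, 1), (0, 1, 1)]
V = [(1, 1, 0), (0, 1, 0)]          # contains an orthogonal pair
print(diameter(ov_to_diameter_graph(U, V)))  # 3
```

An orthogonal pair $u, v$ shares no coordinate node, and $v$ is not adjacent to $s$, so their distance is 3 (via $s$-$t$); every other pair of nodes is at distance at most 2 through the hubs.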
It suffices to find which parameter is connected to treewidth via the reduction. As seen in Figure 1(a), the graph produced by the reduction has a very specific structure: all nodes of group $U$ are linked exclusively with nodes of the coordinate group $C$ and with node $s$. Similarly, the nodes of group $V$ connect only to group $C$ and node $t$. Therefore, to produce a tree decomposition of $G$ we can leave the nodes of group $U$ unrelated to those of group $V$.
Now, the specific connections of nodes from the groups $U$, $V$ to the nodes of group $C$ can vary, depending on the form of the OV instance. We can however give an upper bound on the treewidth of $G$, as shown in Figure 1(b), by copying the whole group $C$ into all of the decomposition’s bags. One can check that each component induced by a label is connected, and that all edges of $G$ are covered by the given bags. This decomposition has maximum bag size $O(d)$ and hence width $O(d)$, where $d$ is the size of group $C$.
As follows from the definition, this is an upper bound on the treewidth of $G$; however, another tree decomposition with smaller width could exist. In order to prove that the treewidth of $G$ is in fact $\Theta(d)$, we must show that there exist instances of OV that produce, through this reduction, graphs of treewidth $\Omega(d)$. Depending on the vectors containing a 1 in the suitable coordinates, we could end up with a graph containing a complete bipartite graph as a minor. Since the complete bipartite graph $K_{d,d}$ has treewidth exactly $d$, we can deduce that the graphs produced by this reduction have treewidth $\Theta(d)$ in the worst case (as seen in Chapter 10 of [12]).
Since $d$ is exactly the dimension of the OV instance producing the graph, we can see that there exists a function $g$ that maps the dimension of the OV instance to the treewidth of the 3/2-approx-Diameter instance.
Therefore, the mapping $g$ witnesses that OV is PFG-reducible to 3/2-approx-Diameter. ∎
It was shown in [5] that 3/2-approx-Diameter parameterized by treewidth has a subquadratic algorithm. Hence, FPI(3/2-approx-Diameter, treewidth) holds. Now, as follows from Theorem 4, we should expect that FPI(OV, $k$) holds, where $k$ is the dimension of the vectors. Equivalently, we would expect a parametric improvement in the running time of OV for the instances that are related to the instances of the 3/2-approx-Diameter problem of bounded treewidth.
By Theorem 4.1 we have that, since 3/2-approx-Diameter has a parameterized improvement for fixed treewidth, OV has a subquadratic algorithm for fixed dimension of the vectors.
We will construct a subquadratic fixed-parameter algorithm via the process described above.
Specifically, for the reduction time: the graph constructed contains $O(n + d)$ nodes and $O(nd)$ edges, and can be constructed in $O(nd)$ time from the OV instance.
The resulting graph has $O(n + d)$ nodes, and the decision problem of 3/2-approx-Diameter for this graph also gives an answer to the decision problem of the OV instance.
The parameterized complexity of the algorithm solving diameter is $f(tw) \cdot n^{1+o(1)}$ for a computable function $f$, where $tw$ denotes the treewidth of the graph [5].
Ergo, since $tw = O(d)$ (via our reduction), we can use the above to obtain an algorithm for OV running in time $f(d) \cdot n^{1+o(1)}$, which is subquadratic for fixed dimension $d$.
Through our reduction, this result can be carried over to all problems PFG-reducible to OV, such as SAT or Hitting Set. See the full version for the respective analyses of these reductions.
As we have seen, if problem $B$ admits parameterized improvements on parameters $\bar{\lambda}$, then through the reduction this can be translated to improvements on problem $A$ with respect to parameters $\bar{\kappa}$ such that $g(\bar{\kappa}) = \bar{\lambda}$. However, whether we can locate which parameters constitute $\bar{\kappa}$ depends on the invertibility of $g$. In the case $g$ is not invertible, one can only show the existence of such an algorithm, but not necessarily construct it. Nevertheless, the FPI property still holds through our definition, because we can abuse the notation to interpret each $\lambda_i$ as a parameter of $A$, as it is a byproduct of the reduction, which is a computable function on the input of $A$.
5 Circuit Characterization of FPI
We provide a characterization of FPI using circuit complexity. Specifically, it is known that any circuit of size $s$ can be simulated by an algorithm with complexity $O(s)$ (up to polylogarithmic factors); thus, if one can design a circuit with size smaller than the conjectured complexity of the problem, then this can be translated into a faster algorithm.
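To illustrate the simulation direction, a circuit given as a topologically ordered gate list can be evaluated in time linear in its size; a minimal sketch with an ad-hoc gate-list format of our own:

```python
def eval_circuit(gates, inputs):
    """Evaluate a topologically ordered gate list; one dictionary
    update per gate, so the running time is linear in circuit size."""
    val = dict(inputs)
    for name, op, args in gates:
        if op == "AND":
            val[name] = all(val[a] for a in args)
        elif op == "OR":
            val[name] = any(val[a] for a in args)
        elif op == "NOT":
            val[name] = not val[args[0]]
    return val[gates[-1][0]]  # the last gate is the output

# (x1 AND x2) OR (NOT x3)
gates = [("g1", "AND", ["x1", "x2"]),
         ("g2", "NOT", ["x3"]),
         ("out", "OR", ["g1", "g2"])]
print(eval_circuit(gates, {"x1": True, "x2": False, "x3": True}))  # False
```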
As such, having a circuit of size $t(n)$, if we can fix any number of parameters such that the circuit can be seen as having size $f(P) \cdot t(n)^{1-\varepsilon}$, we can use this circuit to produce a truly sub-$t(n)$ algorithm for our problem.
Nevertheless, the smallest circuit solving the problem may differ from the one produced via a simulation of an algorithm (which is the only universal way to produce a circuit from an arbitrary algorithm). This means that an improvement in the size complexity of the circuit may not be enough to be translated into a more efficient algorithm via an inverse simulation. In that case, for the improvement in the size of the circuit to be translated into a faster algorithm, it is necessary to exceed this difference. From now on, when referring to a circuit solving a problem, the reader should consider the one produced by the simulation procedure.
As shown in [14], we can simulate any algorithm running in time $t(n)$ by a circuit of size $O(t(n) \log t(n))$. Thus, we can use this as an upper bound on the overall size complexity of the circuit produced, to show that an improvement of the circuit size to $f(P) \cdot t(n)^{1-\varepsilon}$ (for some $\varepsilon > 0$) is always sufficient.
Let $A$ be a problem with conjectured best running time $t(n)$. Then FPI$(A, P)$ holds if and only if the uniform circuit family computing $A$ has, for each input length $n$, size $f(P) \cdot t(n)^{1-\varepsilon}$, for some $\varepsilon > 0$ and a computable function $f$.
Proof.
“⇐”:
If the circuit for inputs of length $n$ has size $f(P) \cdot t(n)^{1-\varepsilon}$, then choosing $\delta = \varepsilon$ we obtain an algorithm running in time $f(P) \cdot t(n)^{1-\delta}$ (up to polylogarithmic factors), which is an FPI improvement on the running time, since we can simulate a circuit of that size in quasi-linear time.
“⇒”:
If there is an algorithm and a parameter set $P$ for which the running time is $f(P) \cdot t(n)^{1-\varepsilon}$, then we can simulate it with a circuit of size $O\big((f(P) + t(n)^{1-\varepsilon}) \cdot \log(f(P) \cdot t(n)^{1-\varepsilon})\big)$.
In the scope of parameterized complexity, we can transform the addition into multiplication, since it is equivalent, as seen in [12].
If we then choose $\varepsilon' < \varepsilon$, the logarithmic factor is absorbed and the circuit has size $f'(P) \cdot t(n)^{1-\varepsilon'}$, for a computable function $f'$. ∎
6 Conclusion
In this work we have introduced a framework for fine-grained reductions that can capture a deeper connection between the problems involved, namely a correlation among their parameters. We have shown that this framework captures the essence of the fine-grained approach without restricting its results. As a byproduct of our analysis, we defined and studied the structure of improvable problems, and the implications of fine-grained reductions on such problems. Finally, we produced a fixed-parameter improvement in the running time of the OV problem by utilizing its parametric correlation to the 3/2-approx-Diameter problem.
A notable discussion in this field is whether or not this framework can be used to define a complexity class, since FPI as a property has some unusual features. Specifically, the inherent meaning of "hardness" that arises results in the absence of maximal elements (at least currently) in the partial ordering defined by parameterized fine-grained reductions. Additionally, because of the conjectured nature of our notion of improvements, the property is directly related to previous work on each problem. It is possible that a parameterized algorithm may be proven suboptimal in the case a problem’s conjectured best running time is updated, resulting in disproving said property. As such, if problems having this property are considered a class, inclusion in this class could be negated after the fact, which is inconsistent with traditional complexity classes.
Using this framework, one can follow the direction of Theorem 4.1 to produce parameterized improvements via the transitivity of PFGR. This analysis can be carried out for each reduction in the fine-grained reduction web, producing a wide variety of improved algorithms for many interesting problems.
A natural question to consider is the relation between our work and the traditional parameterized approach. As seen in Theorem 4, it remains an open problem to find the exact relation between FPI and FPT, that is, to formally characterize the problems in FPT that are not FPI. Additionally, one could potentially utilize the plethora of results available through the parameterized framework on tractable or harder problems. All of these results may be translated to our terminology given the appropriate assumptions.
References
 [1] Amir Abboud and Karl Bringmann. Tighter connections between Formula-SAT and shaving logs. In 45th International Colloquium on Automata, Languages, and Programming, ICALP 2018, July 9-13, 2018, Prague, Czech Republic, pages 8:1–8:18, 2018. URL: https://doi.org/10.4230/LIPIcs.ICALP.2018.8.
 [2] Amir Abboud, Karl Bringmann, Holger Dell, and Jesper Nederlof. More consequences of falsifying SETH and the orthogonal vectors conjecture. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018, Los Angeles, CA, USA, June 25-29, 2018, pages 253–266, 2018. URL: https://doi.org/10.1145/3188745.3188938.
 [3] Amir Abboud, Fabrizio Grandoni, and Virginia Vassilevska Williams. Subcubic equivalences between graph centrality problems, APSP and diameter. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '15, pages 1681–1697, Philadelphia, PA, USA, 2015. Society for Industrial and Applied Mathematics. URL: http://dl.acm.org/citation.cfm?id=2722129.2722241.
 [4] Amir Abboud, Thomas Dueholm Hansen, Virginia Vassilevska Williams, and Ryan Williams. Simulating branching programs with edit distance and friends: or: a polylog shaved is a lower bound made. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 375–388, 2016. URL: https://doi.org/10.1145/2897518.2897653.
 [5] Amir Abboud, Virginia Vassilevska Williams, and Joshua R. Wang. Approximation and fixed parameter subquadratic algorithms for radius and diameter. CoRR, abs/1506.01799, 2015. URL: http://arxiv.org/abs/1506.01799.
 [6] Amir Abboud, Virginia Vassilevska Williams, and Huacheng Yu. Matching triangles and basing hardness on an extremely popular conjecture. SIAM J. Comput., 47(3):1098–1122, 2018. URL: https://doi.org/10.1137/15M1050987.
 [7] Sanjeev Arora and Boaz Barak. Computational Complexity: A Modern Approach. Cambridge University Press, New York, NY, USA, 1st edition, 2009.
 [8] Karl Bringmann and Marvin Künnemann. Multivariate fine-grained complexity of longest common subsequence. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7-10, 2018, pages 1216–1235, 2018. URL: https://doi.org/10.1137/1.9781611975031.79.
 [9] Marco L. Carmosino, Jiawei Gao, Russell Impagliazzo, Ivan Mihajlin, Ramamohan Paturi, and Stefan Schneider. Nondeterministic extensions of the strong exponential time hypothesis and consequences for non-reducibility. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, Cambridge, MA, USA, January 14-16, 2016, pages 261–270, 2016. URL: https://doi.org/10.1145/2840728.2840746.
 [10] Marco L. Carmosino, Russell Impagliazzo, and Manuel Sabin. Fine-grained derandomization: from problem-centric to resource-centric complexity. In 45th International Colloquium on Automata, Languages, and Programming, ICALP 2018, July 9-13, 2018, Prague, Czech Republic, pages 27:1–27:16, 2018. URL: https://doi.org/10.4230/LIPIcs.ICALP.2018.27.
 [11] Lijie Chen and Ryan Williams. An equivalence class for orthogonal vectors. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 21–40, 2019. URL: https://doi.org/10.1137/1.9781611975482.2.
 [12] Rodney G. Downey and Michael R. Fellows. Fundamentals of Parameterized Complexity. Springer Publishing Company, Incorporated, 2013.
 [13] Rodney G. Downey and Dimitrios M. Thilikos. Confronting intractability via parameters. Computer Science Review, 5(4):279–317, 2011. URL: https://doi.org/10.1016/j.cosrev.2011.09.002.
 [14] Martin Fürer. The tight deterministic time hierarchy. In Proceedings of the 14th Annual ACM Symposium on Theory of Computing, May 5-7, 1982, San Francisco, California, USA, pages 8–16, 1982. URL: https://doi.org/10.1145/800070.802172.
 [15] Anka Gajentaan and Mark H. Overmars. On a class of O(n²) problems in computational geometry. Comput. Geom., 45(4):140–152, 2012. URL: https://doi.org/10.1016/j.comgeo.2011.11.006.
 [16] Jiawei Gao and Russell Impagliazzo. The fine-grained complexity of strengthenings of first-order logic. Electronic Colloquium on Computational Complexity (ECCC), 26:9, 2019. URL: https://eccc.weizmann.ac.il/report/2019/009.
 [17] Jiawei Gao, Russell Impagliazzo, Antonina Kolokolova, and R. Ryan Williams. Completeness for first-order properties on sparse structures with algorithmic applications. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, January 16-19, 2017, pages 2162–2181, 2017. URL: https://doi.org/10.1137/1.9781611974782.141.
 [18] Archontia C. Giannopoulou, George B. Mertzios, and Rolf Niedermeier. Polynomial fixed-parameter algorithms: a case study for longest path on interval graphs. CoRR, abs/1506.01652, 2015. URL: http://arxiv.org/abs/1506.01652.
 [19] Hamid Jahanjou, Eric Miles, and Emanuele Viola. Local reductions. In Automata, Languages, and Programming - 42nd International Colloquium, ICALP 2015, Kyoto, Japan, July 6-10, 2015, Proceedings, Part I, pages 749–760, 2015. URL: https://doi.org/10.1007/978-3-662-47672-7_61.
 [20] Daniel M. Kane and Richard Ryan Williams. The orthogonal vectors conjecture for branching programs and formulas. In 10th Innovations in Theoretical Computer Science Conference, ITCS 2019, January 10-12, 2019, San Diego, California, USA, pages 48:1–48:15, 2019. URL: https://doi.org/10.4230/LIPIcs.ITCS.2019.48.
 [21] Liam Roditty and Virginia Vassilevska Williams. Fast approximation algorithms for the diameter and radius of sparse graphs. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, STOC '13, pages 515–524, New York, NY, USA, 2013. ACM. URL: http://doi.acm.org/10.1145/2488608.2488673.
 [22] Ryan Williams. A new algorithm for optimal 2-constraint satisfaction and its implications. Theor. Comput. Sci., 348(2-3):357–365, 2005. URL: https://doi.org/10.1016/j.tcs.2005.09.023.
 [23] Virginia Vassilevska Williams. Hardness of easy problems: basing hardness on popular conjectures such as the strong exponential time hypothesis (invited talk). In 10th International Symposium on Parameterized and Exact Computation, IPEC 2015, September 16-18, 2015, Patras, Greece, pages 17–29, 2015. URL: https://doi.org/10.4230/LIPIcs.IPEC.2015.17.
 [24] Virginia Vassilevska Williams and R. Ryan Williams. Subcubic equivalences between path, matrix, and triangle problems. J. ACM, 65(5):27:1–27:38, 2018. URL: https://doi.org/10.1145/3186893.