Problem parameterization is a relatively recent approach to coping with computational intractability. By exploiting small parameter values, fixed-parameter tractable algorithms have been used to solve a variety of hard computational problems. One of the most widely used methods for tackling NP-hardness in practice is polynomial-time pre-processing (kernelization), and parameterized complexity provides a natural mathematical framework that gives performance guarantees for pre-processing rules. Many NP-hard problems can be solved by algorithms running in uniformly polynomial time, i.e. in time $f(k)\cdot n^{O(1)}$ for some computable function $f$, if some part of the input, of size $k$, is taken as a fixed parameter. These problems are called fixed-parameter tractable, or FPT for short.
Consider, for example, the well-studied $k$-Vertex Cover problem, where we are given a graph $G$ and a positive integer $k$ as input, and the goal is to check if there is a vertex cover of size at most $k$. The problem is NP-complete; however, viewed through the parameterized lens, it can be solved in time $O(2^k\cdot n)$, which is efficient for instances with small parameter values. Another useful example is the Clique problem, in which we are looking for a clique on $k$ vertices, i.e. a set of $k$ vertices with an edge between each pair of them. Clique is unlikely to be FPT when the solution size $k$ is the parameter; nevertheless, it is FPT when the parameter is the maximum degree of the input graph. Therefore, the more we know about our input instances, the more we can exploit algorithmically! Downey and Fellows have set up a general framework to study the complexity of fixed-parameter problems. Notable breakthroughs in the subject include Robertson and Seymour's algorithm for the subgraph homeomorphism and minor containment problems, Bodlaender's algorithm for finding tree decompositions of graphs of bounded treewidth, and Courcelle's algorithm for solving problems expressible in monadic second-order logic on graphs of bounded treewidth, in all of which the hidden constant is a function of the parameter. In this paper, we consider the parameterized complexity of the problem of partitioning a graph into relatively inter-sparse pieces, in the sense that not too many edges cross between them.
Finding dense or sparse areas of graphs is a fundamental computational problem with many important applications in different fields of science, such as computational biology and social network analysis. In this work, we study the problem of finding a $k$-partition of the vertices of a graph in which each part has low edge expansion.
More precisely, let $G=(V,E)$ be a graph endowed with a weight function $w:E\to\mathbb{Q}^{\geq 0}$ and let $S\subseteq V$ be a subset of vertices. The edge expansion of $S$ is defined as
$$\phi_G(S)=\frac{w(\partial S)}{|S|},$$
where $\partial S$ stands for the set of all edges in $G$ with exactly one endpoint in $S$. We drop the subscript $G$ when there is no ambiguity. The Sparsest Cut problem asks for a subset $S$ with at most $|V|/2$ vertices which has the least edge expansion. One may define
$$\phi(G)=\min_{\substack{\emptyset\neq S\subseteq V\\ |S|\leq |V|/2}}\phi(S).$$
Also, the decision problem can be stated as follows.
Input: A graph $G=(V,E)$, a weight function $w:E\to\mathbb{Q}^{\geq 0}$ and a rational number $\phi_0$.
Question: Does there exist a subset $S\subseteq V$, where $|S|\leq |V|/2$ and $\phi(S)\leq\phi_0$?
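To make the definitions concrete, the following sketch (function and variable names are ours, not from the paper) computes the edge expansion of a subset and the sparsest-cut value of a small graph by exhaustive search; it is exponential and meant only as a reference implementation.

```python
from itertools import combinations

def edge_expansion(edges, S):
    """phi(S) = w(boundary(S)) / |S|: total weight of edges with exactly
    one endpoint in S, divided by |S|.  `edges` maps (u, v) -> weight."""
    S = set(S)
    boundary = sum(w for (u, v), w in edges.items() if (u in S) != (v in S))
    return boundary / len(S)

def sparsest_cut(vertices, edges):
    """Brute-force phi(G): minimum expansion over all nonempty subsets
    of at most |V|/2 vertices.  For tiny graphs only."""
    best = float("inf")
    for r in range(1, len(vertices) // 2 + 1):
        for S in combinations(vertices, r):
            best = min(best, edge_expansion(edges, S))
    return best

# 4-cycle with unit weights: a single vertex has expansion 2/1 = 2,
# two adjacent vertices have expansion 2/2 = 1, so phi(C4) = 1.
c4 = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (3, 0): 1}
print(sparsest_cut([0, 1, 2, 3], c4))  # 1.0
```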
The unweighted version of the problem is when all edge weights are equal to one. The value of the sparsest cut is also called the conductance or the Cheeger constant of $G$. The Sparsest Cut problem has been highly influential in the study of algorithms and complexity, in both theoretical and applied aspects. It has many applications in graph clustering [43, 27], image segmentation, analysis of Markov chains [42, 25] and expander graphs [26, 23, 32].
The mean edge expansion of $S$ is defined as $\bar\phi(S)=\frac{w(\partial S)}{|S|\,|V\setminus S|}$, and the Mean Sparsest Cut problem seeks a subset with minimum mean edge expansion. In the literature, there is also a non-uniform version of the problem, where another graph on the same vertex set, endowed with a demand function, is given in the input, and the edge expansion of $S$ is defined as the weight of the edges of $G$ crossing $S$ divided by the total demand of the pairs separated by $S$. In this paper, we essentially focus on the uniform sparsest cut problem and its generalizations, which are introduced as follows.
A natural generalization of the Sparsest Cut problem is to find a $k$-partition of $V$ such that the worst edge expansion of the parts is minimized. More precisely, let $k\geq 2$ be an integer and $\mathcal{P}=(P_1,\dots,P_k)$ be a partition of $V$ into $k$ nonempty subsets. Define
$$\phi_k(G)=\min_{\mathcal{P}}\ \max_{1\leq i\leq k}\ \phi(P_i),$$
where the minimum is taken over all $k$-partitions $\mathcal{P}$ of $V$. This generalization is called the $k$-Sparsest Cut problem. One may see that the Sparsest Cut problem is the special case $k=2$. The decision version of the problem is defined as follows.
$k$-Sparsest Cut ($k$-SC)
Input: A graph $G=(V,E)$, a weight function $w:E\to\mathbb{Q}^{\geq 0}$, a positive integer $k$ and a rational number $\phi_0$.
Question: Does there exist a $k$-partition $(P_1,\dots,P_k)$ of $V$ where the edge expansion of each part is at most $\phi_0$, i.e. for every $1\leq i\leq k$, $\phi(P_i)\leq\phi_0$?
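For very small instances, this decision problem can be checked directly by enumerating all $k$-partitions. The sketch below (names are ours) does exactly that; it is exponential and serves only to pin down the semantics of the question above.

```python
def edge_expansion(edges, S):
    """phi(S): weight of edges leaving S divided by |S|."""
    S = set(S)
    boundary = sum(w for (u, v), w in edges.items() if (u in S) != (v in S))
    return boundary / len(S)

def k_partitions(items, k):
    """Yield all partitions of `items` into exactly k nonempty parts."""
    if len(items) == k:
        yield [[x] for x in items]
        return
    if len(items) < k or k == 0:
        return
    first, rest = items[0], items[1:]
    for part in k_partitions(rest, k - 1):   # `first` opens a new part
        yield [[first]] + part
    for part in k_partitions(rest, k):       # `first` joins an existing part
        for i in range(k):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

def k_sc(vertices, edges, k, phi0):
    """Decide k-SC by exhaustive search: is there a k-partition in which
    every part has edge expansion at most phi0?"""
    return any(all(edge_expansion(edges, P) <= phi0 for P in parts)
               for parts in k_partitions(list(vertices), k))
```

On the unit-weight 4-cycle, `k_sc(range(4), c4, 2, 1.0)` is true (split it into two adjacent pairs), while no 2-partition achieves expansion 0.5 in every part.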
Another generalization of the Sparsest Cut problem arises when we restrict the search space to small subsets of $V$. Let $s$ be a positive integer. The Small-Set Expansion problem seeks a subset of size at most $s$ with minimum edge expansion. Let us define
$$\Phi_s(G)=\min_{\substack{\emptyset\neq S\subseteq V\\ |S|\leq s}}\phi(S).$$
$s$-Small-Set Expansion ($s$-SSE)
Input: A graph $G=(V,E)$, a weight function $w:E\to\mathbb{Q}^{\geq 0}$, a positive integer $s$ and a rational number $\phi_0$.
Question: Does there exist a subset $S\subseteq V$ of size at most $s$ such that $\phi(S)\leq\phi_0$?
If $(P_1,\dots,P_k)$ is a $k$-partition of $V$, then there is some $i$ where $|P_i|\leq |V|/k$. Therefore, for every integer $k\geq 2$, we have
$$\Phi_{\lfloor |V|/k\rfloor}(G)\leq \phi_k(G). \qquad (1)$$
Also, note that
$$\Phi_{\lfloor |V|/2\rfloor}(G)=\phi(G)=\phi_2(G). \qquad (2)$$
Thus, equality holds in (1) when $k=2$.
The Small-Set Expansion problem is related to a very important conjecture called the Small-Set Expansion Hypothesis (SSEH). Let $G$ be an undirected $d$-regular graph on $n$ vertices. The SSEH states that for any constant $\epsilon>0$, there is some $\delta>0$ such that it is NP-hard to distinguish the case that some subset of at most $\delta n$ vertices has normalized edge expansion at most $\epsilon$ from the case that every such subset has normalized edge expansion at least $1-\epsilon$. It is known that the SSEH implies the Unique Games Conjecture of Khot (for more information, see [39, 28]).
Two of the classic results regarding the Sparsest Cut problem ($k=2$) are Leighton and Rao's $O(\log n)$-approximation algorithm and Arora, Rao, and Vazirani's $O(\sqrt{\log n})$-approximation algorithm. Regarding the Mean Sparsest Cut problem, Bonsma et al. showed that the problem can be solved in cubic time for unweighted graphs of bounded treewidth. For graphs of bounded clique-width, the same authors showed that the problem can be solved in time polynomial in the number $n$ of vertices of the input graph. For the non-uniform version of the problem, a 2-approximation algorithm was presented whose running time is FPT with respect to the treewidth of the graph.
Related to the generalized $k$-Sparsest Cut problem, Lee et al. proved a higher-order Cheeger inequality asserting that $\phi_k(G)\leq O(k^2)\sqrt{\lambda_k}$, where $\lambda_k$ is the $k$th smallest eigenvalue of the normalized Laplacian matrix of the graph. Daneshgar et al. [15, 16] showed that $k$-SC is NP-hard even for trees and gave a polynomial-time algorithm for weighted trees when the search space is relaxed to all $k$-subpartitions. Alimi et al. gave an approximation algorithm for the problem, and Louis et al. provided a polynomial-time approximation algorithm which outputs a $k$-partition of the vertex set such that each piece has expansion at most a polylogarithmic factor times OPT.
While Sparsest Cut looks for a cut of minimum inter-density in a graph, finding subgraphs of maximum intra-density is also a very well studied problem. A prominent instance of such problems is Clique, which asks for a complete subgraph of order $k$ and is W[1]-hard for the parameter $k$, yet fixed-parameter tractable with respect to the dual parameter $n-k$. There are many different definitions of what a dense subgraph is, and for almost all of these formulations, the corresponding computational problems are NP-hard. In the Densest $k$-Subgraph problem (D$k$S), we are given a graph $G$ and an integer $k$, and we are asked for a subset $S$ of $k$ vertices such that the number of edges induced by $S$ is maximized.
The problem D$k$S is NP-hard and W[1]-hard for the parameter $k$, as it is a generalization of Clique. Furthermore, D$k$S is NP-hard even in graphs with maximum degree three and degeneracy two. Asahiro et al. gave a 2-approximation algorithm for D$k$S running in linear time, using a simple greedy algorithm. Cai et al. gave a randomized fixed-parameter algorithm for D$k$S on bounded-degree graphs running in time $f(k,\Delta)\cdot n^{O(1)}$, where $\Delta$ is the maximum degree of the input graph and $f$ is some function depending only on $k$ and $\Delta$. Bourgeois et al. presented two FPT algorithms for D$k$S which take as parameter, respectively, the treewidth and the size of a minimum vertex cover of the input graph.
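A typical greedy strategy for D$k$S, in the spirit of the algorithms mentioned above, is min-degree peeling: repeatedly delete a vertex of minimum degree until exactly $k$ vertices remain. The sketch below is our own illustration of that heuristic (not a reproduction of any cited algorithm, and we make no claim about its approximation factor); it uses a heap with lazy deletion of stale entries.

```python
import heapq

def greedy_dks(n, edge_list, k):
    """Min-degree peeling heuristic for Densest k-Subgraph: repeatedly
    delete a vertex of minimum current degree until k vertices remain."""
    adj = {v: set() for v in range(n)}
    for u, v in edge_list:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(range(n))
    heap = [(len(adj[v]), v) for v in alive]
    heapq.heapify(heap)
    while len(alive) > k:
        d, v = heapq.heappop(heap)
        if v not in alive or d != len(adj[v]):
            continue                      # stale heap entry, skip it
        alive.remove(v)
        for u in adj[v]:
            adj[u].discard(v)
            heapq.heappush(heap, (len(adj[u]), u))  # refresh neighbor degree
        adj[v].clear()
    return alive
```

On a triangle with a pendant vertex, peeling with $k=3$ removes the pendant first and returns the triangle, the densest 3-subgraph.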
In this paper, we study the parameterized complexity of the $k$-Sparsest Cut and $s$-Small-Set Expansion problems, focusing on graphs of bounded treewidth, bounded vertex cover, bounded degree and bounded degeneracy. We divide the investigation into weighted and unweighted graphs (i.e. when there is a weight function on the edges, or all edge weights are equal to one). Table 1 gives an overview of our results. The problem $k$-SC shows different complexity behavior in the weighted and unweighted versions. For instance, on unweighted graphs, for every fixed $k$, $k$-SC is FPT with respect to the parameters treewidth and minimum vertex cover; in the weighted version, however, it becomes NP-hard.
We begin by presenting our results for the weighted versions of $k$-SC and SSE in Section 3. We prove that SSE and 2-SC are FPT with respect to the treewidth and the minimum vertex cover, while $k$-SC, for $k\geq 3$, is NP-hard even when these parameters are bounded. Also, we prove hardness of $k$-SC and SSE with respect to the parameters maximum degree and degeneracy of the input graph.
In Section 4, we investigate the unweighted version of $k$-SC and we prove that for every fixed $k$, the problem is FPT with respect to the treewidth and the minimum vertex cover. Although $k$ appears in the exponent of the running time of both algorithms, we prove that it is unlikely that this can be improved to $f(k)\cdot n^{O(1)}$, by showing that unweighted $k$-SC is W[1]-hard for the parameters $k$ and treewidth, combined.
Section 5 begins by proving W[1]-hardness of SSE for the parameter $s$. The section also contains a randomized FPT algorithm for SSE with respect to $s$ and the maximum degree of the input graph, combined.
[Table 1: Overview of our results for weighted and unweighted $k$-SC and SSE under the parameters treewidth, minimum vertex cover, maximum degree and degeneracy: FPT results (Theorems 1, 2, 8, 9 and 11), NP-hardness results (Theorems 3 and 5, Corollaries 4 and 6) and W[1]-hardness results (Theorems 7 and 10).]
All problems are considered on an undirected graph $G=(V,E)$. We denote the open neighborhood of a vertex $v$ in $G$ by $N(v)$. The size of $N(v)$ is called the degree of $v$, and the maximum degree over all vertices is denoted by $\Delta(G)$. Given a subset $S\subseteq V$, the subgraph of $G$ induced by $S$ is denoted by $G[S]$. A graph $G$ is called $d$-degenerate if every induced subgraph of $G$ has a vertex of degree at most $d$. The minimum number $d$ for which $G$ is $d$-degenerate is called the degeneracy of $G$. It is easy to see that every $d$-degenerate graph admits an acyclic orientation such that the outdegree of each vertex is at most $d$. Many interesting families of graphs are $d$-degenerate for some fixed constant $d$; for example, trees are 1-degenerate and planar graphs are 5-degenerate. A vertex cover of $G$ is a subset of vertices such that every edge in $E$ is incident with at least one vertex in it. The vertex cover number of $G$ is the minimum size of a vertex cover of $G$.
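The degeneracy can be computed by the standard min-degree removal process; the following sketch (names are ours) returns both the degeneracy and the removal order. It is a naive $O(n^2)$ version, sufficient to illustrate the definition.

```python
def degeneracy(adjacency):
    """Degeneracy of a graph given as {vertex: set(neighbors)}: repeatedly
    remove a vertex of minimum remaining degree; the answer is the largest
    degree observed at removal time.  Also returns the removal order."""
    adj = {v: set(nbrs) for v, nbrs in adjacency.items()}  # work on a copy
    order, d = [], 0
    remaining = set(adj)
    while remaining:
        v = min(remaining, key=lambda u: len(adj[u]))
        d = max(d, len(adj[v]))
        order.append(v)
        remaining.remove(v)
        for u in adj[v]:
            adj[u].discard(v)
    return d, order
```

Orienting every edge from a vertex toward its neighbors that are removed later in this order yields the acyclic orientation with outdegree at most $d$ mentioned above.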
A tree decomposition of a graph $G$ is a pair $(T,\{X_t\}_{t\in V(T)})$, where $T$ is a tree whose every node $t$ is assigned a vertex subset $X_t\subseteq V(G)$, called a bag, such that the following three conditions hold:
$\bigcup_{t\in V(T)}X_t=V(G)$. In other words, every vertex of $G$ is in at least one bag.
For every edge $uv\in E(G)$, there exists a node $t$ of $T$ such that bag $X_t$ contains both $u$ and $v$.
For every $v\in V(G)$, the set $T_v=\{t\in V(T): v\in X_t\}$, i.e., the set of nodes whose corresponding bags contain $v$, induces a connected subtree of $T$.
The width of a tree decomposition $(T,\{X_t\}_{t\in V(T)})$ is defined as $\max_{t\in V(T)}|X_t|-1$. The treewidth of a graph $G$, denoted by $tw(G)$, is the minimum possible width of a tree decomposition of $G$. To distinguish between the vertices of the decomposition tree $T$ and the vertices of the graph $G$, we will refer to the vertices of $T$ as nodes. The treewidth of an $n$-vertex clique is $n-1$ and that of a complete bipartite graph $K_{m,n}$ is $\min\{m,n\}$. It is known that finding the treewidth of a given graph is NP-hard. However, deciding whether there is a tree decomposition of width $\tau$ for a given graph on $n$ vertices can be done in time $2^{O(\tau^3)}\cdot n$. A tree decomposition $(T,\{X_t\}_{t\in V(T)})$ is a nice tree decomposition if $T$ is a binary tree rooted at a node $r$ with the following properties.
$X_r=\emptyset$ and $X_\ell=\emptyset$ for every leaf $\ell$ of $T$. In other words, the root and all the leaves have empty bags.
Every non-leaf node of $T$ is of one of the following three types:
Introduce node: a node $t$ with exactly one child $t'$ such that $X_t=X_{t'}\cup\{v\}$ for some vertex $v\notin X_{t'}$; we say that $v$ is introduced at $t$.
Forget node: a node $t$ with exactly one child $t'$ such that $X_t=X_{t'}\setminus\{v\}$ for some vertex $v\in X_{t'}$; we say that $v$ is forgotten at $t$.
Join node: a node $t$ with two children $t_1,t_2$ such that $X_t=X_{t_1}=X_{t_2}$.
An algorithm that transforms, in linear time, a tree decomposition into a nice one of the same width is known.
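The three conditions above can be verified mechanically. The sketch below (our own helper, with hypothetical names) checks a candidate tree decomposition; for condition three it exploits the fact that a subgraph of a tree is connected exactly when its edge count is one less than its node count.

```python
def is_tree_decomposition(graph_edges, tree_edges, bags):
    """Check the three tree-decomposition conditions.  `bags` maps each
    tree-node id to the set of graph vertices in its bag."""
    vertices = {v for e in graph_edges for v in e}
    # (1) every vertex of G appears in at least one bag
    if not vertices <= set().union(*bags.values()):
        return False
    # (2) every edge of G is contained in some bag
    if not all(any({u, v} <= bag for bag in bags.values())
               for u, v in graph_edges):
        return False
    # (3) for each vertex v, the nodes whose bags contain v induce a
    # connected subtree: a subgraph of a tree is connected iff it is a
    # tree itself, i.e. it has exactly (#nodes - 1) edges
    for v in vertices:
        nodes = {t for t, bag in bags.items() if v in bag}
        induced = [(a, b) for a, b in tree_edges if a in nodes and b in nodes]
        if len(induced) != len(nodes) - 1:
            return False
    return True
```

For the path $0-1-2$, the two-bag decomposition $X_a=\{0,1\}$, $X_b=\{1,2\}$ passes, and its width is one, matching the treewidth of a tree.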
A parameterized problem is a language $L\subseteq\Sigma^*\times\mathbb{N}$, where $\Sigma$ is a fixed, finite alphabet. For an instance $(x,k)\in\Sigma^*\times\mathbb{N}$, $k$ is called the parameter. A parameterized problem is called fixed-parameter tractable (FPT) if there exists an algorithm (called a fixed-parameter algorithm), a computable function $f:\mathbb{N}\to\mathbb{N}$, and a constant $c$ such that, given $(x,k)\in\Sigma^*\times\mathbb{N}$, the algorithm correctly decides whether $(x,k)\in L$ in time bounded by $f(k)\cdot|x|^c$. The complexity class containing all fixed-parameter tractable problems is called FPT.
The classes W[1], W[2], $\dots$ contain parameterized problems which presumably do not admit FPT algorithms. Hardness for W[1] can be shown by reducing from a W[1]-hard problem using a parameterized reduction. Given two parameterized problems $A,B\subseteq\Sigma^*\times\mathbb{N}$, a parameterized reduction from $A$ to $B$ is an algorithm that, given an instance $(x,k)$ of $A$, outputs an instance $(x',k')$ of $B$ such that
$(x,k)$ is a yes-instance of $A$ if and only if $(x',k')$ is a yes-instance of $B$,
$k'\leq g(k)$ for some computable function $g$, and
the running time of the algorithm is $f(k)\cdot|x|^{O(1)}$ for some computable function $f$.
3 Weighted version
In the following, we prove that SSE and 2-SC, when parameterized by the treewidth, are fixed-parameter tractable. In Theorem 8, we will extend this result to $k$-SC for general $k$. It is noteworthy that, due to the well-known result of Courcelle, any problem which is expressible in monadic second-order logic (MSO2) can be solved in linear time on bounded-treewidth graphs. Nevertheless, it seems unlikely that our problems admit a natural expression in MSO2.
The problems SSE, for every $s$, and 2-SC, parameterized by the treewidth, are fixed-parameter tractable. Moreover, if the input graph has $n$ vertices and a tree decomposition of width $\tau$ is given, then the algorithm runs in time $2^{O(\tau)}\cdot n^{O(1)}$ for SSE as well as for 2-SC, and uses space exponential in $\tau$.
First, note that due to Equation (2), 2-SC is a special case of SSE, so we only give the proof for SSE. The proof is based on a dynamic programming algorithm which computes the values of a table on the nodes of a tree decomposition of the graph in a bottom-up fashion. For convenience and easier analysis, we use a nice tree decomposition.
Suppose that $|V|=n$ and consider a nice tree decomposition $(T,\{X_t\}_{t\in V(T)})$ for $G$ of width $\tau$, as defined in Section 2. For each node $t$, let $T_t$ be the subtree of $T$ rooted at $t$ and let $G_t$ be the subgraph of $G$ induced by the set $V_t$ of vertices appearing in the bags of $T_t$. For each node $t$, every integer $0\leq i\leq n$ and every subset $A\subseteq X_t$, define
$$c[t,i,A]=\min\big\{\,w(\partial_{G_t}S)\ :\ S\subseteq V_t,\ |S|=i,\ S\cap X_t=A\,\big\}.$$
The value of $c[t,i,A]$ is defined to be $+\infty$ when there is no feasible solution. Now, we consider a table where each row represents a node $t$ of $T$ (from the leaves to the root), and each column represents an integer $0\leq i\leq n$ and a subset $A\subseteq X_t$. The entry in row $t$ and column $(i,A)$ is equal to $c[t,i,A]$. The algorithm examines the nodes of $T$ in a bottom-up manner and fills in the table by the following recursions.
In the initialization step, for each leaf $\ell$, we have $X_\ell=\emptyset$. Therefore, $c[\ell,0,\emptyset]=0$ and $c[\ell,i,\emptyset]=+\infty$ for every $i>0$.
Let $t$ be a forget node with a child $t'$, where $X_t=X_{t'}\setminus\{v\}$ and $v\in X_{t'}$. Then, the vertex $v$ is either inside or outside the solution $S$. Therefore,
$$c[t,i,A]=\min\big\{\,c[t',i,A],\ c[t',i,A\cup\{v\}]\,\big\}.$$
Let $t$ be an introduce node with a child $t'$, where $X_t=X_{t'}\cup\{v\}$ and $v\notin X_{t'}$. Note that all neighbors of $v$ in $G_t$ lie in $X_t$. For an integer $i$ and a subset $A\subseteq X_t$, we have
$$c[t,i,A]=\begin{cases} c[t',i-1,A\setminus\{v\}]+w\big(E(v,X_t\setminus A)\big) & \text{if } v\in A,\\ c[t',i,A]+w\big(E(v,A)\big) & \text{if } v\notin A,\end{cases}$$
where $E(v,B)$ denotes the set of edges between $v$ and $B$.
Let $t$ be a join node with children $t_1$ and $t_2$, where $X_t=X_{t_1}=X_{t_2}$. Since $V_{t_1}\cap V_{t_2}=X_t$, the edges of $G[X_t]$ crossing $A$ are counted in both children and must be subtracted once. Then,
$$c[t,i,A]=\min_{i_1+i_2=i+|A|}\big\{\,c[t_1,i_1,A]+c[t_2,i_2,A]\,\big\}-w\big(E(A,X_t\setminus A)\big).$$
Finally, the value of the optimum is equal to $\min_{1\leq i\leq s}c[r,i,\emptyset]/i$, where $r$ is the root of $T$. Since the size of the table is at most $2^{\tau+1}\cdot(n+1)\cdot|V(T)|$ and each entry of a join node is computed in polynomial time, the runtime of the whole algorithm is at most $2^{O(\tau)}\cdot n^{O(1)}$. Also, for a graph of treewidth $\tau$, a tree decomposition of width $O(\tau)$ can be found in time $2^{O(\tau)}\cdot n$. Hence, the problems are in FPT. ∎
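When implementing such a dynamic program, it is useful to have a brute-force reference for the quantity $c[t,i,A]$ against which the recursions can be sanity-checked on tiny graphs. A sketch under the notation of the proof (names are ours):

```python
from itertools import combinations

def min_boundary(vertices, edges, i, bag, A):
    """Brute-force c[t, i, A]: minimum total weight of edges with exactly
    one endpoint in S, over all subsets S of size i with S ∩ bag == A.
    `edges` maps (u, v) -> weight; `A` must be a subset of `bag`."""
    rest = [v for v in vertices if v not in bag]
    need = i - len(A)
    if need < 0 or need > len(rest):
        return float("inf")               # no feasible solution
    best = float("inf")
    for extra in combinations(rest, need):
        S = set(A) | set(extra)
        cut = sum(w for (u, v), w in edges.items() if (u in S) != (v in S))
        best = min(best, cut)
    return best
```

For the unit-weight path $0-1-2$ with bag $\{0\}$ and $A=\{0\}$, the best size-2 set is $\{0,1\}$ with boundary weight 1, as the forget/introduce recursions should reproduce.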
Since the treewidth of a graph is bounded by its vertex cover number, Theorem 1 implies that SSE and 2-SC with the parameter vertex cover are both in FPT. However, the algorithm uses exponential space. In the following theorem, when the size of a vertex cover is bounded, we give an alternative algorithm whose runtime is better and which uses polynomial space.
The problems SSE, for every $s$, and 2-SC can be solved in time $2^{\nu}\cdot n^{O(1)}$ and polynomial space, where $n$ and $\nu$ are respectively the order and the vertex cover number of the input graph.
Similarly, we only give the proof for SSE. Let $C$ be a vertex cover of the graph of size $\nu$ and define $I=V\setminus C$; note that $I$ is an independent set. Fix a subset $A\subseteq C$ and an integer $i$ and define
$$c[A,i]=\min\big\{\,w(\partial S)\ :\ S\subseteq V,\ S\cap C=A,\ |S|=i\,\big\}.$$
When there is no feasible solution, define $c[A,i]$ to be $+\infty$.
We show that for fixed $A$, the values $c[A,i]$, for all $i$, can be found in time polynomial in $n$. For every vertex $u\in I$, define $a(u)=w(E(u,A))$ and $b(u)=w(E(u,C\setminus A))$. Also, define $W_A=w(E(A,C\setminus A))$. Now, let $S$ be such that $S\cap C=A$ and $|S|=i$. Then,
$$w(\partial S)=W_A+\sum_{u\in I\setminus S}a(u)+\sum_{u\in S\cap I}b(u)=W_A+\sum_{u\in I}a(u)+\sum_{u\in S\cap I}\big(b(u)-a(u)\big).$$
Hence, $c[A,i]$ is obtained by choosing the $i-|A|$ vertices $u\in I$ with the smallest values of $b(u)-a(u)$.
Finally, we have
$$\Phi_s(G)=\min\big\{\,c[A,i]/i\ :\ A\subseteq C,\ 1\leq i\leq s\,\big\}.$$
For each $A$, the computation of all values $c[A,i]$ takes $O(n\log n)$ time and polynomial space, using a simple sorting algorithm. Thus, the algorithm runs in time $2^{\nu}\cdot n^{O(1)}$ and uses polynomial space. ∎
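The enumerate-and-sort idea of this proof can be sketched directly. The code below is our own illustration (names and data layout are ours): for each trace $A=S\cap C$ it accumulates the cut weight while adding independent-set vertices in order of increasing marginal cost $b(u)-a(u)$.

```python
from itertools import combinations

def sse_via_vertex_cover(vertices, edges, cover, s):
    """Sketch of the vertex-cover algorithm for SSE: enumerate the 2^|C|
    traces A = S ∩ C; outside the cover the graph is an independent set,
    so the best size-i completion comes from sorting marginal costs."""
    I = [v for v in vertices if v not in cover]
    wuv = lambda u, v: edges.get((u, v), 0) + edges.get((v, u), 0)
    best = float("inf")
    for r in range(len(cover) + 1):
        for A in combinations(cover, r):
            A = set(A)
            base = sum(wuv(u, v) for u in A for v in cover if v not in A)
            a = {u: sum(wuv(u, v) for v in A) for u in I}            # u kept out
            b = {u: sum(wuv(u, v) for v in cover if v not in A) for u in I}
            margins = sorted(b[u] - a[u] for u in I)
            cost = base + sum(a.values())       # cut weight when S = A
            size = len(A)
            while size <= s:
                if size >= 1:
                    best = min(best, cost / size)
                idx = size - len(A)
                if idx >= len(margins):
                    break
                cost += margins[idx]            # admit next cheapest vertex
                size += 1
    return best
```

On the unit-weight 4-cycle with cover $\{0,2\}$ and $s=2$, it finds $\Phi_2=1$ (take two adjacent vertices), matching the brute-force definition.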
For every fixed integer $k\geq 3$, the problem $k$-SC is NP-hard for graphs with vertex cover number at most three.
Let $k\geq 3$ be a fixed integer. We are going to prove that $k$-SC is NP-hard for graphs with vertex cover number at most three. We give a polynomial reduction from the Partition problem, which is well known to be NP-hard.
Partition
Input: Positive integers $a_1,\dots,a_n$, where $\sum_{i=1}^{n}a_i=2b$.
Query: Does there exist a subset $J\subseteq\{1,\dots,n\}$ such that $\sum_{i\in J}a_i=b$?
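For intuition, Partition can be decided by the classical subset-sum dynamic program, which is pseudo-polynomial, i.e. polynomial in $\sum_i a_i$; this is why NP-hardness of Partition relies on the numbers being encoded in binary (a point that matters again later, when Unary Bin Packing is used instead). A minimal sketch, with names of our choosing:

```python
def partition(nums):
    """Decide Partition by subset-sum reachability: pseudo-polynomial,
    running in time O(n * sum(nums))."""
    total = sum(nums)
    if total % 2:
        return False
    reachable = {0}                    # subset sums achievable so far
    for x in nums:
        reachable |= {r + x for r in reachable}
    return total // 2 in reachable
```

For example, `[3, 1, 1, 2, 2, 1]` splits into two halves of weight 5, while `[1, 2, 5]` does not split evenly.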
Let $(a_1,\dots,a_n)$ be an instance of Partition. Let us define $b=\frac{1}{2}\sum_{i=1}^{n}a_i$. We construct a graph $G$ with vertex cover number equal to three and a number $\phi_0$ such that the answer to Partition is yes if and only if $\phi_k(G)\leq\phi_0$.
Let $M$ be a fixed integer that will be determined shortly, and define $G$ to be a bipartite graph whose bipartition, adjacencies and edge weights are as follows.
First, suppose that the answer to the Partition problem is yes, and let $J\subseteq\{1,\dots,n\}$ be such that $\sum_{i\in J}a_i=b$. Define a $k$-partition of $V(G)$ accordingly. It is then easy to check that $\phi(P_i)\leq\phi_0$ for every $1\leq i\leq k$.
For the converse, suppose that $(P_1,\dots,P_k)$ is a $k$-partition of $V(G)$ such that $\phi(P_j)\leq\phi_0$ for each $j$. Since the corresponding edge weights are sufficiently large, each gadget is completely included in one subset of the partition.
If, for some $j$, the intersection of $P_j$ with one of the distinguished sets is non-empty, then its intersection with the other is also non-empty; otherwise the expansion of $P_j$ would exceed $\phi_0$, which is a contradiction.
Now, there exist at least three subsets, say $P_1,P_2,P_3$, whose corresponding intersections are empty. By the above argument, each of them has non-empty intersection with the remaining vertices, so w.l.o.g. we may assume the partition is in a normalized form. Now, for each such part, define the set of indices of the elements it contains. This yields a partition of the index set, and for each part we have
Thus, for each part, the total weight is at most $b$. Now, since $\sum_{i=1}^{n}a_i=2b$, both parts have total weight exactly $b$, witnessing a solution of Partition. This completes the proof. ∎
For every fixed integer $k\geq 3$, the problem $k$-SC is NP-hard for graphs with treewidth at most three.
For our next result, we show that $k$-Sparsest Cut remains NP-hard on graphs with maximum degree at most three and also on graphs with degeneracy at most two. The idea is based on a standard degree-reduction gadget.
For every fixed integer $k\geq 2$,
(i) the problem $k$-SC is NP-hard for graphs with maximum degree three, and
(ii) the problem $k$-SC is NP-hard for graphs with degeneracy two.
We give a reduction from $k$-SC on general graphs, which is known to be NP-hard for every fixed integer $k\geq 2$. Let $(G,w,\phi_0)$ be an instance of $k$-Sparsest Cut, where $G$ is a weighted graph with edge weights $w$. We construct a weighted graph $G'$ with maximum degree three as follows.
For every vertex $v$ of $G$, let $C_v$ be a cycle on $\deg(v)$ vertices, and place all these cycles in $G'$. For each edge of $G$, say $uv$, create an edge in $G'$ between a private vertex of $C_u$ and a private vertex of $C_v$, so that each cycle vertex is incident with at most one such edge, and give it the weight $w(uv)$. Also, let the weights of the edges in the cycles be a sufficiently large integer (see Figure 2).
It is clear that the construction is polynomial and the obtained graph has maximum degree three. Now, every $k$-partition of $V(G)$ induces a $k$-partition of $V(G')$ in which each $C_v$ is placed in the part of $v$. Moreover, since the edge weights of the cycles are large enough, in every minimizer for $G'$, all vertices of each $C_v$ appear in the same part. So, the reduction preserves the edge expansion and this proves (i).
In order to prove (ii), replace each edge of $G$ with a path of length three whose two end edges have a sufficiently large integer weight and whose middle edge keeps the original weight. Call the obtained graph $G''$; it is clear that the degeneracy of $G''$ is equal to two (since the vertices of degree three induce a stable set). Now, given a $k$-partition as above, for each edge between two parts we add the two internal path vertices to the parts of their respective endpoints, obtaining a $k$-partition of $V(G'')$. Then, it is clear that the expansions of corresponding parts coincide. Therefore, the instance for $G$ is a yes-instance if and only if the instance for $G''$ is. This completes the proof. ∎
Since the problem SC is a special case of the problem SSE, we can deduce the following corollary.
(i) The problem SSE is NP-hard for graphs with maximum degree three and also for graphs with degeneracy two.
(ii) The problem SSE is W[1]-hard for the parameters $s$ and $\Delta$ combined, and also for $s$ and the degeneracy combined, where $\Delta$ denotes the maximum degree of the input graph.
4 The unweighted version
In this section, we consider the unweighted version of the $k$-Sparsest Cut problem, i.e. when all edge weights are equal to one. First, we present a W[1]-hardness result when the problem is parameterized by the treewidth of the input graph and the number $k$, combined. Note that W[1]-hardness for combined parameters implies W[1]-hardness for each parameter separately.
The unweighted $k$-SC problem is W[1]-hard when parameterized by the treewidth of the input graph and the number $k$, combined.
We give a parameterized reduction from Unary Bin Packing, parameterized by the number of bins, defined as follows.
Unary Bin Packing
Input: Positive integers $k$, $B$ and $a_1,\dots,a_n$, each encoded in unary.
Query: Can we partition the $n$ items with weights $a_1,\dots,a_n$ into $k$ bins such that the sum of the weights in each bin does not exceed $B$?
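To fix the semantics of this problem, the following sketch (our own, names hypothetical) decides Unary Bin Packing by branching on the bin of each item, largest item first, pruning capacity overflows and symmetric bins; it is exponential in general, consistent with the W[1]-hardness discussed below.

```python
def unary_bin_packing(weights, k, B):
    """Decide whether `weights` fit into k bins of capacity B each."""
    weights = sorted(weights, reverse=True)   # largest first prunes faster
    bins = [B] * k                            # remaining capacity per bin

    def place(i):
        if i == len(weights):
            return True                       # every item placed
        tried = set()
        for j in range(k):
            # skip bins that overflow, and bins with a capacity we
            # already tried at this level (they are interchangeable)
            if bins[j] >= weights[i] and bins[j] not in tried:
                tried.add(bins[j])
                bins[j] -= weights[i]
                if place(i + 1):
                    return True
                bins[j] += weights[i]         # backtrack
        return False

    return place(0)
```

For example, items $\{4,3,3,2\}$ fit into two bins of capacity 6 (as $\{4,2\}$ and $\{3,3\}$), whereas three items of weight 4 do not, even though their total weight is only $12=2\cdot 6$.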
Jansen et al. showed that Unary Bin Packing is W[1]-hard when parameterized by the number $k$ of bins.
Let us consider an instance of Unary Bin Packing given as $(k,B;a_1,\dots,a_n)$. Also, let $A=\sum_{i=1}^{n}a_i$. If $A>kB$, then it is evidently a NO-instance. Without loss of generality, we may assume that $A=kB$, since otherwise we can add $kB-A$ items of weight one. Then, we construct an instance for $k$-SC.
For our convenience, we first construct a weighted instance of $k$-SC in which the vertices are weighted. Then, using a known unitarization technique, we construct an unweighted instance of $k$-SC. When $\omega$ is a vertex weight function, the edge expansion of $S$ is defined as $\phi(S)=\frac{w(\partial S)}{\omega(S)}$, where $\omega(S)=\sum_{v\in S}\omega(v)$.
The instance consists of a weighted bipartite graph defined as follows (see Figure 3), where $\epsilon$ is an arbitrarily small number and $M$ is a constant integer that will be determined later. Also, let all edge weights be equal to one, and let the target expansion $\phi_0$ be chosen accordingly. So, we have an instance of $k$-SC.
First, suppose that $(k,B;a_1,\dots,a_n)$ is a YES-instance of Unary Bin Packing. Then, there is a partition of the items into $k$ bins such that the total weight of each bin is exactly $B$ (recall that $A=kB$). Now, for each $j$, define the corresponding part of the $k$-partition. Therefore,