Journal of the ACM (JACM)

Latest Articles

Rumor Spreading and Conductance

In this article, we study the completion time of the PUSH-PULL variant of rumor spreading, also known as randomized broadcast. We show that if a network has n nodes and conductance φ then, with high probability, PUSH-PULL will deliver the message to all nodes in the graph within O(log n/φ) communication rounds. This bound is best...
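For intuition, PUSH-PULL is easy to simulate: in each round every node contacts a uniformly random neighbor, and the rumor crosses any contact whose endpoints disagree on knowing it. A minimal sketch (function and variable names are ours, not the paper's):

```python
import random

def push_pull_rounds(adj, start, rng):
    """Simulate PUSH-PULL rumor spreading; return rounds until all informed.

    adj: dict mapping node -> list of neighbors; start: initially informed node.
    Each round every node contacts one uniformly random neighbor; the rumor
    travels if exactly one endpoint of the contact already knows it.
    """
    informed = {start}
    rounds = 0
    while len(informed) < len(adj):
        contacts = {v: rng.choice(adj[v]) for v in adj if adj[v]}
        newly = set()
        for v, u in contacts.items():
            if v in informed and u not in informed:
                newly.add(u)   # push: informed v tells u
            elif v not in informed and u in informed:
                newly.add(v)   # pull: uninformed v asks u
        informed |= newly
        rounds += 1
    return rounds

# Complete graph on 16 nodes: conductance is constant, so O(log n) rounds
# are expected.
n = 16
adj = {v: [u for u in range(n) if u != v] for v in range(n)}
r = push_pull_rounds(adj, 0, random.Random(0))
```

On a two-node path the rumor always crosses in the first round, which makes a handy sanity check.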

Path ORAM: An Extremely Simple Oblivious RAM Protocol

We present Path ORAM, an extremely simple Oblivious RAM protocol with a small amount of client storage. Partly due to its simplicity, Path ORAM is the most practical ORAM scheme known to date with small client storage. We formally prove that Path ORAM has an O(log N) bandwidth cost for blocks of size...

Distributed (Δ+1)-Coloring in Sublogarithmic Rounds

We give a new randomized distributed algorithm for (Δ+1)-coloring in the LOCAL model, running in O(√(log Δ)) + 2^{O(√(log log n))}...

Non-Malleable Codes

We introduce the notion of “non-malleable codes” which relaxes the notion of error correction and error detection. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely unrelated value. In contrast to error correction and error detection, non-malleability...

Equivalence of Deterministic Top-Down Tree-to-String Transducers Is Decidable

We prove that equivalence of deterministic top-down tree-to-string transducers is decidable, thus solving a long-standing open problem in formal...

Threesomes, Degenerates, and Love Triangles

The 3SUM problem is to decide, given a set of n real numbers, whether any three sum to zero. It is widely conjectured that a trivial O(n^2)-time algorithm is optimal on the Real RAM, and optimal even in the nonuniform linear decision tree model. Over the years the consequences of this conjecture have been revealed. This 3SUM conjecture implies...
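The "trivial" quadratic algorithm the conjecture refers to can be stated in a few lines: sort once, then run a two-pointer scan for each fixed element. A sketch (our naming, integer inputs for concreteness):

```python
def has_three_sum(nums):
    """Decide 3SUM with the classic O(n^2) sort + two-pointer scan.

    For each index i, scan the remaining suffix with two pointers: move the
    low pointer up when the sum is too small, the high pointer down when it
    is too large."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return False
```

The conjecture asserts that no algorithm beats this by a polynomial factor on the Real RAM.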


About JACM

The Journal of the ACM (JACM) provides coverage of the most significant work on principles of computer science, broadly construed. The scope of research we cover encompasses contributions of lasting value to any area of computer science. To be accepted, a paper must be judged to be truly outstanding in its field. JACM is interested in work in core computer science and at the boundaries, both the boundaries of subdisciplines of computer science and the boundaries between computer science and other fields.

Editorial Process

The Journal of the ACM begins the refereeing process with a "quick review" to check whether the manuscript has a plausible chance of meeting JACM's high standards, even if all the claimed results are correct. JACM tries to cover a broad spectrum of areas, and can only accept 4-5 papers in any given area every year. Thus, we try to focus on the most significant papers in each area, those that would be of interest to the broad community, and we reject many papers that would be accepted by other journals.

Important Note on P/NP

Some submissions purport to solve a long-standing open problem in complexity theory, such as the P/NP problem. Many of these turn out to be mistaken, and such submissions tax JACM volunteer editors and reviewers.

Forthcoming Articles
Decremental Single-Source Shortest Paths on Undirected Graphs in Near-Linear Total Update Time

In the decremental single-source shortest paths (SSSP) problem we want to maintain the distances between a given source node s and every other node in an n-node m-edge graph G undergoing edge deletions. While its static counterpart can be solved in near-linear time, this decremental problem is much more challenging even in the undirected unweighted case. In this case, the classic O(mn) total update time of Even and Shiloach [1981] has been the fastest known algorithm for three decades. At the cost of a (1+ε)-approximation factor, the running time was recently improved to n^{2+o(1)} by Bernstein and Roditty [2011]. In this paper, we bring the running time down to near-linear: We give a (1+ε)-approximation algorithm with m^{1+o(1)} total update time, thus obtaining near-linear time. Moreover, we obtain m^{1+o(1)} log W time for the weighted case, where the edge weights are integers from 1 to W. The only prior work on weighted graphs in o(mn) time is the mn^{0.9+o(1)}-time algorithm by Henzinger et al. [2014, 2015], which works for directed graphs with quasi-polynomial edge weights. The expected running time bound of our algorithm holds against an oblivious adversary. In contrast to the previous results, which rely on maintaining a sparse emulator, our algorithm relies on maintaining a so-called sparse (h, ε)-hop set introduced by Cohen [2000] in the PRAM literature. An (h, ε)-hop set of a graph G = (V, E) is a set F of weighted edges such that the distance between any pair of nodes in G can be (1+ε)-approximated by their h-hop distance (given by a path containing at most h edges) on G' = (V, E ∪ F). Our algorithm can maintain an (n^{o(1)}, ε)-hop set of near-linear size in near-linear time under edge deletions. It is the first of its kind, to the best of our knowledge. To maintain approximate distances using this hop set, we extend the monotone Even-Shiloach tree of Henzinger et al. [2013] and combine it with the bounded-hop SSSP technique of Bernstein [2009; 2013] and Mądry [2010]. These two new tools might be of independent interest.
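The h-hop distance in the hop-set definition is exactly what Bellman-Ford computes when capped at h relaxation rounds. A small static illustration of that notion (not the paper's dynamic algorithm; names are ours):

```python
import math

def h_hop_dists(n, edges, src, h):
    """h-hop distances from src: cheapest path using at most h edges.

    Runs h synchronized rounds of Bellman-Ford relaxation over undirected
    weighted edges (u, v, w); nodes not reachable within h hops stay at inf.
    """
    dist = [math.inf] * n
    dist[src] = 0.0
    for _ in range(h):
        new = dist[:]
        for u, v, w in edges:
            if dist[u] + w < new[v]:
                new[v] = dist[u] + w
            if dist[v] + w < new[u]:
                new[u] = dist[v] + w
        dist = new
    return dist

# Path 0-1-2-3 with unit weights: node 3 is invisible within 2 hops but
# reachable at cost 3 within 3 hops.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
d2 = h_hop_dists(4, edges, 0, 2)
d3 = h_hop_dists(4, edges, 0, 3)
```

Adding hop-set edges F shrinks the h needed for the h-hop distance to approximate the true distance, which is what makes small h with near-linear |F| useful.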

Minimization of Tree Patterns

Many of today's graph query languages are based on graph pattern matching. We investigate the optimization of tree-shaped patterns that have transitive closure operators. Such patterns appear not only in the context of graph databases but were originally studied for querying tree-structured data, where they can perform child-, descendant-, node-label-, and wildcard-tests. The minimization problem aims at reducing the number of nodes in patterns and goes back to the early 2000s. We provide an example showing that, in contrast to earlier claims, tree patterns cannot be minimized by deleting nodes only. The example resolves the M =?= NR problem, which asks if a tree pattern is minimal if and only if it is nonredundant. The example can be adapted to prove that minimization is Π^P_2-complete, which resolves another question that was open since the early research on the problem. The latter result shows that, unless NP = Π^P_2, more general approaches for minimizing tree patterns are also bound to fail in general.

Fast Learning Requires Good Memory: A Time-Space Lower Bound for Parity Learning

We prove that any algorithm for learning parities requires either a memory of quadratic size or an exponential number of samples. This proves a recent conjecture of Steinhardt, Valiant and Wager and shows that for some learning problems a large storage space is crucial. More formally, in the problem of parity learning, an unknown string $x \in \{0,1\}^n$ was chosen uniformly at random. A learner tries to learn $x$ from a stream of samples $(a_1, b_1), (a_2, b_2), \ldots$, where each $a_t$ is uniformly distributed over $\{0,1\}^n$ and $b_t$ is the inner product of $a_t$ and $x$, modulo 2. We show that any algorithm for parity learning that uses less than $n^2/25$ bits of memory requires an exponential number of samples. Previously, there was no non-trivial lower bound on the number of samples needed, for any learning problem, even if the allowed memory size is $O(n)$ (where $n$ is the space needed to store one sample). We also give an application of our result in the field of bounded-storage cryptography. We show an encryption scheme that requires a private key of length $n$, as well as time complexity of $n$ per encryption/decryption of each bit, and is provably and unconditionally secure as long as the attacker uses less than $n^2/25$ memory bits and the scheme is used at most an exponential number of times. Previous works on bounded-storage cryptography assumed that the memory size used by the attacker is at most linear in the time needed for encryption/decryption.
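The flip side of the lower bound: with roughly n^2 bits of memory, parity learning is trivial, since the samples form a linear system over GF(2). A sketch of that full-memory learner (function names and the seeded sample generation are ours):

```python
import random

def learn_parity(samples, n):
    """Recover a parity secret from labeled samples by Gaussian elimination
    over GF(2). Storing the augmented matrix takes ~n^2 bits, i.e. exactly
    the memory regime in which, per the theorem, learning is easy."""
    rows = [a[:] + [b] for a, b in samples]  # augmented rows [a | b]
    pivot_cols, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[r])]
        pivot_cols.append(c)
        r += 1
    x = [0] * n
    for i, c in enumerate(pivot_cols):
        x[c] = rows[i][n]
    return x  # a solution consistent with every sample

# Generate samples b_t = <a_t, x> mod 2 for a hidden secret (seeded for
# reproducibility); ~2n random samples make the system full rank w.h.p.
n = 16
rng = random.Random(1)
secret = [rng.randrange(2) for _ in range(n)]
samples = []
for _ in range(40):
    a = [rng.randrange(2) for _ in range(n)]
    samples.append((a, sum(u * v for u, v in zip(a, secret)) % 2))
recovered = learn_parity(samples, n)
```

The theorem says that shrinking this learner's memory below n^2/25 bits forces any strategy, not just elimination, to consume exponentially many samples.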

Ontology-Mediated Queries: Combined Complexity and Succinctness of Rewritings via Circuit Complexity

We give solutions to two fundamental computational problems in ontology-based data access with the W3C standard ontology language OWL 2 QL: the succinctness problem for first-order rewritings of ontology-mediated queries (OMQs), and the complexity problem for OMQ answering. We classify OMQs according to the shape of their conjunctive queries (treewidth, the number of leaves) and the existential depth of their ontologies. For each of these classes, we determine the combined complexity of OMQ answering, and whether all OMQs in the class have polynomial-size first-order, positive existential, and nonrecursive datalog rewritings. We obtain the succinctness results using hypergraph programs, a new computational model for Boolean functions, which makes it possible to connect the size of OMQ rewritings and circuit complexity.

Erratum: Limited-Use Atomic Snapshots with Polylogarithmic Step Complexity

This is an erratum for the article "Limited-Use Atomic Snapshots with Polylogarithmic Step Complexity", published in J. ACM 62(1): 3:1-3:22 (2015). The implementation of a MaxArray_{k×h} object in Algorithm 2 does not guarantee linearizability. We give here a simple correction to the algorithm and its correctness proof.

Pseudorandomness from Shrinkage

One powerful theme in complexity theory and pseudorandomness in the past few decades has been the use of lower bounds to give pseudorandom generators (PRGs). However, the general results using this hardness vs. randomness paradigm suffer a quantitative loss in parameters, and hence do not give nontrivial implications for models where we don't know super-polynomial lower bounds but do know lower bounds of a fixed polynomial. We show that when such lower bounds are proved using random restrictions, we can construct PRGs which are essentially best possible without in turn improving the lower bounds. More specifically, say that a circuit family has shrinkage exponent Γ if a random restriction leaving a p fraction of variables unset shrinks the size of any circuit in the family by a factor of p^{Γ+o(1)}. Our PRG uses a seed of length s^{1/(Γ+1)+o(1)} to fool circuits in the family of size s. By using this generic construction, we get PRGs with polynomially small error for the following classes of circuits of size s and with the following seed lengths: (1) for de Morgan formulas, seed length s^{1/3+o(1)}; (2) for formulas over an arbitrary basis, seed length s^{1/2+o(1)}; (3) for read-once de Morgan formulas, seed length s^{.234...}; (4) for branching programs of size s, seed length s^{1/2+o(1)}. The previous best PRGs known for these classes used seeds of length bigger than n/2 to output n bits, and worked only for size s = O(n) (Bogdanov, Papakonstantinou, and Wan).

Subcubic Equivalences Between Path, Matrix, and Triangle Problems

We say an algorithm on n × n matrices with integer entries in [-M, M] (or n-node graphs with edge weights from [-M, M]) is truly subcubic if it runs in O(n^{3-δ} · polylog M) time for some δ > 0. We define a notion of subcubic reducibility, and show that many important problems on graphs and matrices solvable in O(n^3) time are equivalent under subcubic reductions. Namely, the following weighted problems either all have truly subcubic algorithms, or none of them do:

  • The all-pairs shortest paths problem on weighted digraphs (APSP).
  • Detecting if a weighted graph has a triangle of negative total edge weight.
  • Listing up to $n^{2.99}$ negative triangles in an edge-weighted graph.
  • Finding a minimum weight cycle in a graph of non-negative edge weights.
  • The replacement paths problem on weighted digraphs.
  • Finding the second shortest simple path between two nodes in a weighted digraph.
  • Checking whether a given matrix defines a metric.
  • Verifying the correctness of a matrix product over the $(\min,+)$-semiring.
  • Finding a maximum subarray in a given matrix.
Therefore, if APSP cannot be solved in n^{3-ε} time for any ε > 0, then many other problems also need essentially cubic time. In fact, we show generic equivalences between matrix products over a large class of algebraic structures used in optimization, verifying a matrix product over the same structure, and corresponding triangle detection problems over the structure. These equivalences simplify prior work on subcubic algorithms for all-pairs path problems, since it now suffices to give appropriate subcubic triangle detection algorithms. Other consequences of our work are new combinatorial approaches to Boolean matrix multiplication over the (OR, AND)-semiring (abbreviated as BMM). We show that practical advances in triangle detection would imply practical BMM algorithms, among other results. Building on our techniques, we give two new BMM algorithms: a derandomization of the recent combinatorial BMM algorithm of Bansal and Williams (FOCS'09), and an improved quantum algorithm for BMM.
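Two of the tied problems have one-screen cubic baselines, which is the yardstick the equivalences are measured against. A sketch of both (our naming; matrices as nested lists):

```python
def min_plus(A, B):
    """(min,+) matrix product: C[i][j] = min_k A[i][k] + B[k][j], in O(n^3).
    Iterating this product is the textbook route to APSP."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def has_negative_triangle(W):
    """Brute-force negative triangle detection: distinct i, j, k with
    W[i][j] + W[j][k] + W[k][i] < 0, also in O(n^3)."""
    n = len(W)
    return any(W[i][j] + W[j][k] + W[k][i] < 0
               for i in range(n) for j in range(n) for k in range(n)
               if i != j and j != k and i != k)
```

The theorem says a truly subcubic algorithm for either routine (or for any problem on the list) yields one for all of them.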

Shuffles and Circuits (On Lower Bounds for Modern Parallel Computation)

The goal of this paper is to identify fundamental limitations on how efficiently algorithms implemented on platforms such as MapReduce and Hadoop can compute the central problems in the motivating application domains, such as graph connectivity problems. We introduce an abstract model of massively parallel computation, where essentially the only restrictions are that the fan-in of each machine is limited to s bits, where s is smaller than the input size n, and that computation proceeds in synchronized rounds, with no communication between different machines within a round. Lower bounds on the round complexity of a problem in this model apply to every computing platform that shares the most basic design principles of MapReduce-type systems. We prove that computations in our model that use few rounds can be represented as low-degree polynomials over the reals. This connection allows us to translate a lower bound on the (approximate) polynomial degree of a Boolean function to a lower bound on the round complexity of every (randomized) massively parallel computation of that function. These lower bounds apply even in the unbounded-width version of our model, where the number of machines can be arbitrarily large. As one example of our general results, computing any non-trivial monotone graph property, such as connectivity, requires a super-constant number of rounds when every machine can accept only a sub-polynomial (in n) number of input bits s. Finally, we prove that, in two senses, our lower bounds are the best one could hope for. For the unbounded-width model, we prove a matching upper bound. Restricting to a polynomial number of machines, we show that asymptotically better lower bounds would separate P from NC1.

Reachability is in DynFO

Patnaik and Immerman introduced the dynamic complexity class DynFO of database queries that can be maintained by first-order dynamic programs with the help of auxiliary relations under insertions and deletions of edges [34]. This article confirms their conjecture that the Reachability query is in DynFO. As a byproduct, it is shown that the rank of a matrix with small values can be maintained in DynFO. It is further shown that the (size of the) maximum matching of a graph can be maintained in non-uniform DynFO, another extension of DynFO, with non-uniform initialisation of the auxiliary relations.

General Belief Revision

In Artificial Intelligence, a key question concerns how an agent may rationally revise its beliefs in light of new information. The standard (AGM) approach to belief revision assumes that the underlying logic contains classical propositional logic. This is a significant limitation, since many representation schemes in AI don't subsume propositional logic. In this paper we consider the question of what the minimal requirements are on a logic, such that the AGM approach to revision may be formulated. We show that AGM-style revision can be obtained even when extremely little is assumed of the underlying language and its semantics; in fact, one requires little more than a language with sentences that are satisfied at models, or possible worlds. The classical AGM postulates are expressed in this framework and a representation result is established between the postulate set and certain preorders on possible worlds. To obtain the representation result, we add a new postulate to the AGM postulates, and we add a constraint to preorders on worlds. Crucially, both of these additions are redundant in the original AGM framework, and so we extend, rather than modify, the AGM approach. As well, iterated revision is addressed and shown to be compatible with our approach. Various examples are given to illustrate the approach, including Horn clause revision, revision in extended logic programs, and belief revision in a very basic logic called literal revision.

Weakest Precondition Reasoning for Expected Runtimes of Randomized Algorithms

This paper presents a wp-style calculus for obtaining bounds on the expected runtime of randomized algorithms. Its applications include determining the (possibly infinite) expected termination time of a randomized algorithm and proving positive almost-sure termination: does a program terminate with probability one in finite expected time? We provide several proof rules for bounding the runtime of loops, and prove the soundness of the approach with respect to a simple operational model. We show that our approach is a conservative extension of Nielson's approach for reasoning about the runtime of deterministic programs. We analyze the expected runtime of some example programs, including the coupon collector's problem, a one-dimensional random walk, and a randomized binary search.
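For the first of those examples, the quantity such a calculus certifies is concrete: the coupon collector needs n·H_n draws in expectation, since stage i (with i coupons missing) is geometric with success probability i/n. A sketch comparing the exact value with a seeded simulation (names are ours):

```python
import random
from fractions import Fraction

def coupon_collector_expected(n):
    """Exact expected number of draws to see all n coupons: n * H_n,
    summing the expectations n/i of the geometric stages."""
    return n * sum(Fraction(1, i) for i in range(1, n + 1))

def coupon_collector_sim(n, trials, rng):
    """Empirical average number of draws over seeded trials."""
    total = 0
    for _ in range(trials):
        seen = set()
        while len(seen) < n:
            seen.add(rng.randrange(n))
            total += 1
    return total / trials
```

The calculus in the paper derives such bounds by proof rules on the loop rather than by summing an explicit series.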

On algebraic branching programs of small width

In 1979 Valiant showed that the complexity class VP_e of families with polynomially bounded formula size is contained in the class VP_s of families that have algebraic branching programs (ABPs) of polynomially bounded size. Motivated by the problem of separating these classes we study the topological closure VP_e-bar, i.e. the class of polynomials that can be approximated arbitrarily closely by polynomials in VP_e. We describe VP_e-bar with a strikingly simple complete polynomial (in characteristic different from 2) whose recursive definition is similar to the Fibonacci numbers. Further understanding this polynomial seems to be a promising route to new formula lower bounds. Our methods are rooted in the study of ABPs of small constant width. In 1992 Ben-Or and Cleve showed that formula size is polynomially equivalent to width-3 ABP size. We extend their result (in characteristic different from 2) by showing that approximate formula size is polynomially equivalent to approximate width-2 ABP size. This is surprising because in 2011 Allender and Wang gave explicit polynomials that cannot be computed by width-2 ABPs at all! The details of our construction lead to the aforementioned characterization of VP_e-bar. As a natural continuation of this work we prove that the class VNP can be described as the class of families that admit a hypercube summation of polynomially bounded dimension over a product of polynomially many affine linear forms. This gives the first separations of algebraic complexity classes from their nondeterministic analogs.

The Cost of Unknown Diameter in Dynamic Networks

For dynamic networks with {\em unknown diameter}, we prove novel lower bounds on the time complexity of a range of basic distributed computing problems. Together with trivial upper bounds under dynamic networks with {\em known diameter} for these problems, our lower bounds show that the complexities of all these problems are {\em sensitive} to whether the diameter is known to the protocol beforehand: Not knowing the diameter increases the time complexities by a large poly$(N)$ factor as compared to when the diameter is known, resulting in an exponential gap. Our lower bounds are obtained via communication complexity arguments and by reducing from the two-party {\sc DisjointnessCP} problem. We further prove that sometimes this large poly$(N)$ cost can be completely avoided if the protocol is given a good estimate on $N$. In other words, having such an estimate makes some problems no longer sensitive to unknown diameter.

Invited Article Foreword for 65.4

Settling the query complexity of non-adaptive junta testing

We prove that any non-adaptive algorithm that tests whether an unknown Boolean function f: {0,1}^n → {0,1} is a k-junta or ε-far from every k-junta must make Ω(k^{3/2}/ε) many queries for a wide range of parameters k and ε. Our result dramatically improves previous lower bounds from [BGSMdW13, STW15], and is essentially optimal given Blais's non-adaptive junta tester from [Bla08], which makes O(k^{3/2})/ε queries. Combined with the adaptive tester of [Bla09], which makes O(k log k + k/ε) queries, our result shows that adaptivity enables polynomial savings in query complexity for junta testing.
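For reference, the property being tested is easy to decide exactly if one may read the whole truth table: f is a k-junta iff at most k variables are relevant, i.e. flipping them changes f on some input. An exponential-time sketch of that check (our naming; the point of testing is to avoid anything like this):

```python
from itertools import product

def relevant_vars(f, n):
    """Indices i such that flipping bit i changes f(x) for some input x."""
    rel = set()
    for x in product((0, 1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1
            if f(tuple(y)) != f(x):
                rel.add(i)
    return rel

def is_k_junta(f, n, k):
    """f is a k-junta iff it has at most k relevant variables."""
    return len(relevant_vars(f, n)) <= k
```

A property tester must approximate this verdict with queries polynomial in k and 1/ε, independent of n; the theorem pins down the non-adaptive query cost.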

Optimal Multi-Way Number Partitioning

The NP-hard number-partitioning problem is to separate a multiset S of n positive integers into k subsets, such that the largest sum of the integers assigned to any subset is minimized. The classic application is scheduling a set of n jobs with different run times on k identical machines such that the makespan, the elapsed time to complete the schedule, is minimized. The two-way number-partitioning decision problem is one of the original 21 problems Richard Karp proved NP-complete. It is also one of Garey and Johnson's six fundamental NP-complete problems, and the only one based on numbers. This article explores algorithms for solving multi-way number-partitioning problems optimally. We explore previous algorithms as well as our own algorithms, which fall into three categories: sequential number partitioning (SNP), a branch-and-bound algorithm; binary-search improved bin completion (BSIBC), a bin-packing algorithm; and cached iterative weakening (CIW), an iterative weakening algorithm. We show experimentally that for large random numbers, SNP and CIW are state-of-the-art algorithms, depending on the values of n and k. Both algorithms outperform the previous state of the art by up to seven orders of magnitude in terms of run time.
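To make the objective concrete, here is a toy depth-first branch and bound for the optimal makespan, far simpler than SNP, BSIBC, or CIW (our naming; it only illustrates the problem statement, not the paper's algorithms):

```python
def min_makespan(nums, k):
    """Smallest achievable maximum subset sum when splitting nums into k
    subsets, by plain depth-first branch and bound."""
    nums = sorted(nums, reverse=True)  # placing big items first prunes faster
    best = sum(nums)                   # trivial bound: one subset gets all
    loads = [0] * k

    def dfs(i):
        nonlocal best
        if i == len(nums):
            best = min(best, max(loads))
            return
        tried = set()
        for j in range(k):
            if loads[j] in tried:            # symmetric branch, skip it
                continue
            tried.add(loads[j])
            if loads[j] + nums[i] >= best:   # cannot beat the incumbent
                continue
            loads[j] += nums[i]
            dfs(i + 1)
            loads[j] -= nums[i]

    dfs(0)
    return best
```

The algorithms in the article add dominance rules, bin-completion search, and caching to push optimal solving to much larger n and k.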

Indistinguishability Obfuscation from Functional Encryption

Indistinguishability obfuscation (IO) is a tremendous notion, powerful enough to give rise to almost any known cryptographic object. Prior candidate IO constructions were based on specific assumptions on algebraic objects called multi-linear graded encodings. We present a generic construction of indistinguishability obfuscation from public-key functional encryption with succinct encryption circuits and subexponential security. This shows the equivalence of indistinguishability obfuscation and public-key functional encryption, a primitive that has so far seemed to be much weaker, lacking the power and the staggering range of applications of indistinguishability obfuscation. Our main construction can be based on functional encryption schemes that support a {\em single function key}, and where the encryption circuit grows sub-linearly in the circuit-size of the function. We further show that sublinear succinctness in circuit-size for single-key schemes can be traded with sublinear succinctness in the number of keys (also known as the {\em collusion-size}) for multi-key schemes. As a consequence, we obtain a new candidate IO construction based on the functional encryption scheme of Garg, Gentry, Halevi and Zhandry (TCC'16) under their assumptions on multi-linear graded encodings. We also show that, under the Learning with Errors assumption, our techniques imply that any indistinguishability obfuscator can be converted into one where the size of obfuscated circuits is twice that of the original circuit plus an additive overhead that is polynomial in its depth, input length, and the security parameter. Our reduction highlights the importance of succinctness in functional encryption schemes, which we hope will serve as a pathway to new IO constructions based on solid cryptographic foundations.

Matroid Secretary Problems

We define a generalization of the classical secretary problem called the matroid secretary problem. In this problem, the elements of a matroid are presented to an online algorithm in uniformly random order. When an element arrives, the algorithm observes its value and must make an irrevocable decision whether or not to accept it. The accepted elements must form an independent set, and the objective is to maximize the combined value of these elements. We present an O(log k)-competitive algorithm for general matroids (where k is the rank of the matroid), and constant-competitive algorithms for several special cases including graphic matroids, truncated partition matroids, and bounded-degree transversal matroids. We leave as an open question the existence of constant-competitive algorithms for general matroids. Our results have applications in welfare-maximizing online mechanism design for domains in which the sets of simultaneously satisfiable agents form a matroid.
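For intuition, the rank-1 case is the classical secretary problem, solved by the familiar 1/e observation rule: watch the first n/e values, then take the first one that beats them all. A minimal sketch (function name ours):

```python
import math

def classical_secretary(values):
    """Classic 1/e rule for the rank-1 (single-hire) secretary problem.

    Observe the first floor(n/e) values without accepting, then accept the
    first later value exceeding that observed maximum; if none does, we are
    forced to take the final element."""
    n = len(values)
    m = int(n / math.e)
    threshold = max(values[:m], default=-math.inf)
    for v in values[m:]:
        if v > threshold:
            return v
    return values[-1]
```

This rule selects the overall best element with probability approaching 1/e; the matroid secretary problem asks for analogous guarantees when the accepted set must be independent in a matroid.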

Circuit complexity, proof complexity, and polynomial identity testing: The ideal proof system

We introduce a new and natural algebraic proof system, whose complexity measure is essentially the algebraic circuit size of Nullstellensatz certificates. This enables us to exhibit close connections between effective Nullstellensätze, proof complexity, and (algebraic) circuit complexity. In particular, we show that any super-polynomial lower bound on any Boolean tautology in our proof system implies that the permanent does not have polynomial-size algebraic circuits (VNP ≠ VP). We also show that super-polynomial lower bounds on the number of lines in Polynomial Calculus proofs imply the Permanent versus Determinant Conjecture. Note that there was no proof system prior to ours for which lower bounds on an arbitrary tautology implied any computational lower bound. Our proof system helps clarify the relationships between previous algebraic proof systems, and begins to shed light on why proof complexity lower bounds for various proof systems have been so much harder than lower bounds on the corresponding circuit classes. In doing so, we highlight the importance of polynomial identity testing (PIT) in proof complexity. In particular, we use PIT to illuminate AC0[p]-Frege lower bounds, which have been open for nearly 30 years, with no satisfactory explanation as to their apparent difficulty. Finally, we explain the obstacles that must be overcome in any attempt to extend techniques from algebraic circuit complexity to prove lower bounds in proof complexity. Using the algebraic structure of our proof system, we propose a novel route to such lower bounds. Although such lower bounds remain elusive, this proposal should be contrasted with the difficulty of extending AC0[p] circuit lower bounds to AC0[p]-Frege lower bounds.

Parallel Metric Tree Embedding based on an Algebraic View on Moore-Bellman-Ford

A metric tree embedding of expected stretch α ≥ 1 maps a weighted n-node graph G = (V, E, ω) to a weighted tree T = (V_T, E_T, ω_T) with V ⊆ V_T such that, for all v, w ∈ V, dist(v, w, G) ≤ dist(v, w, T) and E[dist(v, w, T)] ≤ α · dist(v, w, G). Such embeddings are highly useful for designing fast approximation algorithms, as many hard problems are easy to solve on tree instances. However, to date the best parallel (polylog n)-depth algorithm that achieves an asymptotically optimal expected stretch of α ∈ O(log n) requires Ω(n^2) work and a metric as input. In this paper, we show how to achieve the same guarantees using polylog n depth and Õ(m^{1+ε}) work, where m = |E| and ε > 0 is an arbitrarily small constant. Moreover, one may further reduce the work to Õ(m + n^{1+ε}) at the expense of increasing the expected stretch to O(ε^{-1} log n). Our main tool in deriving these parallel algorithms is an algebraic characterization of a generalization of the classic Moore-Bellman-Ford algorithm. We consider this framework, which subsumes a variety of previous "Moore-Bellman-Ford-like" algorithms, to be of independent interest and discuss it in depth. In our tree embedding algorithm, we leverage it for providing efficient query access to an approximate metric that allows sampling the tree using polylog n depth and Õ(m) work. We illustrate the generality and versatility of our techniques by various examples and a number of additional results. Specifically, we (i) improve the state of the art for determining metric tree embeddings in the Congest model, (ii) determine a (1+ε')-approximate metric regarding the distances in a graph G in polylogarithmic depth and Õ(nm^{1+ε}) work, and (iii) improve upon the state of the art regarding the k-median and the buy-at-bulk network design problems.

Near-Optimal Lower Bounds on Quantifier Depth and Weisfeiler-Leman Refinement Steps

We prove near-optimal trade-offs for quantifier depth versus number of variables in first-order logic by exhibiting pairs of n-element structures that can be distinguished by a k-variable first-order sentence but where every such sentence requires quantifier depth at least n^{Ω(k/log k)}. Our trade-offs also apply to first-order counting logic, and by the known connection to the k-dimensional Weisfeiler-Leman algorithm imply near-optimal lower bounds on the number of refinement iterations. A key component in our proof is the hardness condensation technique recently introduced by [Razborov '16] in the context of proof complexity. We apply this method to reduce the domain size of relational structures while maintaining the minimal quantifier depth to distinguish them in finite variable logics.

The Parameterized Complexity of k-Biclique Problem

Given a graph $G$ and an integer $k$, the $k$-Biclique problem asks whether $G$ contains a complete bipartite subgraph with $k$ vertices on each side. Whether there is an $f(k)\cdot |G|^{O(1)}$-time algorithm solving $k$-Biclique for some computable function $f$ has been a longstanding open problem. We show that such an algorithm is unlikely to exist under a hypothesis from parameterized complexity theory. To prove this result, we give a reduction that, on input an $n$-vertex graph $G$ and a small integer $k$, constructs a bipartite graph $H=(L\;\dot\cup\; R,E)$ in time polynomial in $n$ such that if $G$ contains a clique with $k$ vertices, then there are $k(k-1)/2$ vertices in $L$ with $n^{\Theta(1/k)}$ common neighbors, otherwise any $k(k-1)/2$ vertices in $L$ have at most $(k+1)!$ common neighbors. An additional feature of this reduction is that it creates a gap on the right side of the biclique. Such a gap might have further applications in proving hardness-of-approximation results. Assuming a randomized version of the Exponential Time Hypothesis, we establish an $f(k)\cdot |G|^{o(\sqrt{k})}$-time lower bound for $k$-Biclique for any computable function $f$. Combining our result with the work of [Bulatov and Marx 2014], we obtain a dichotomy classification of the parameterized complexity of cardinality constraint satisfaction problems.
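The object being sought is simple to state in code: k left vertices sharing at least k common neighbors. A brute-force sketch for bipartite inputs (our naming; its exponential dependence on k in the exponent is exactly what the open question asks to remove):

```python
from itertools import combinations

def has_k_biclique(left, edges, k):
    """Check whether the bipartite graph on vertex set `left` (with edges
    (u, v), u in left) contains K_{k,k}: some k left vertices with at least
    k common right-neighbors. Brute force over all k-subsets of left."""
    nbrs = {u: {v for (a, v) in edges if a == u} for u in left}
    for S in combinations(left, k):
        common = set.intersection(*(nbrs[u] for u in S))
        if len(common) >= k:
            return True
    return False
```

An $f(k)\cdot |G|^{O(1)}$ algorithm would decide this with the exponent independent of k; the paper gives evidence that none exists.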
