Journal of the ACM (JACM)

Latest Articles

Estimating the Unseen: Improved Estimators for Entropy and Other Properties

We show that a class of statistical properties of distributions, which includes such practically relevant properties as entropy, the number of distinct elements, and distance metrics between pairs of distributions, can be estimated from a sublinear-sized sample. Specifically, given a sample consisting of independent draws from any distribution...

Polynomial Time Corresponds to Solutions of Polynomial Ordinary Differential Equations of Polynomial Length

The outcomes of this article are twofold. Implicit complexity. We provide an implicit...

The Matching Polytope has Exponential Extension Complexity

A popular method in combinatorial optimization is to express polytopes P, which may potentially have exponentially many facets, as solutions of linear...

Communication Steps for Parallel Query Processing

We study the problem of computing conjunctive queries over large databases on parallel architectures without shared storage. Using the structure of...

NEWS

Important Note on P/NP: Some submissions purport to solve a long-standing open problem in complexity theory, such as the P/NP problem. Many of these turn out to be mistaken, and such submissions tax JACM volunteer editors and reviewers. JACM remains open to the possibility of an eventual resolution of P/NP and related questions, and continues to welcome submissions on the subject. However, to mitigate the burden of repeated resubmissions due to incremental corrections of errors identified during editorial review, no author may submit more than one such paper to JACM, ACM Transactions on Algorithms, or ACM Transactions on Computation Theory in any 24-month period, except by invitation of the Editor-in-Chief. This applies to resubmissions of previously rejected manuscripts. Please consider this policy before submitting such a paper.

About JACM

The Journal of the ACM (JACM) provides coverage of the most significant work on principles of computer science, broadly construed. The scope of research we cover encompasses contributions of lasting value to any area of computer science. To be accepted, a paper must be judged to be truly outstanding in its field. JACM is interested in work in core computer science and at the boundaries, both the boundaries of subdisciplines of computer science and the boundaries between computer science and other fields.

Forthcoming Articles
Fair Enough: Guaranteeing Approximate Maximin Shares

We consider the problem of fairly allocating indivisible goods, focusing on a recently introduced notion of fairness called the maximin share guarantee: each player's value for his allocation should be at least as high as what he can guarantee by dividing the items into as many bundles as there are players and receiving his least desirable bundle. Assuming additive valuation functions, we show that such allocations may not exist, but allocations guaranteeing each player 2/3 of the above value always exist and can be computed in polynomial time when the number of players is constant. These theoretical results have direct practical implications.
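
To make the maximin share definition concrete, here is a small illustrative Python sketch (ours, not the paper's algorithm) that computes one player's maximin share under additive valuations by brute force over all partitions; the paper's 2/3-guarantee algorithm is far more involved.

    # Brute-force maximin share for one player (illustrative only;
    # exponential in the number of items).
    from itertools import product

    def maximin_share(values, n):
        """values: item -> this player's additive value; n: number of players."""
        items = list(values)
        best = 0
        # Enumerate every assignment of items to the n bundles.
        for assignment in product(range(n), repeat=len(items)):
            bundles = [0] * n
            for item, bundle in zip(items, assignment):
                bundles[bundle] += values[item]
            # The player receives the least desirable bundle of this split.
            best = max(best, min(bundles))
        return best

    # Example: 4 items, 2 players; the best split is {a, d} vs {b, c},
    # so this player's maximin share is 5.
    print(maximin_share({"a": 4, "b": 3, "c": 2, "d": 1}, 2))  # 5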

Proving the Incompatibility of Efficiency and Strategyproofness via SMT Solving

Two important requirements when aggregating the preferences of multiple agents are that the outcome should be economically efficient and the aggregation mechanism should not be manipulable. In this paper, we provide a computer-aided proof of a sweeping impossibility using these two conditions for randomized aggregation mechanisms. More precisely, we show that every efficient aggregation mechanism can be manipulated for all expected utility representations of the agents' preferences. This settles an open problem and strengthens a number of existing theorems, including statements that were shown within the special domain of assignment. Our proof is obtained by formulating the claim as a satisfiability problem over predicates from real-valued arithmetic, which is then checked using an SMT (satisfiability modulo theories) solver. In order to verify the correctness of the result, a minimal unsatisfiable set of constraints returned by the SMT solver was then translated back into a proof in higher-order logic, which was automatically verified by an interactive theorem prover. To the best of our knowledge, this is the first application of SMT solvers in computational social choice.
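
For readers unfamiliar with the workflow the abstract describes, the toy Python sketch below shows the SMT step using the Z3 solver's Python bindings (pip install z3-solver). The constraints are a deliberately trivial real-arithmetic system, not the paper's encoding of efficiency and strategyproofness; it only illustrates checking satisfiability and extracting an unsatisfiable core of named constraints, which the paper then translates into a proof verified in higher-order logic.

    # Minimal SMT workflow sketch with Z3 (toy constraints, not the
    # paper's encoding of the impossibility theorem).
    from z3 import Real, Solver, unsat

    x, y = Real("x"), Real("y")
    s = Solver()
    # Track each constraint by name so Z3 can report an unsat core.
    s.assert_and_track(x + y == 1, "sum_is_one")
    s.assert_and_track(x >= y, "x_at_least_y")
    s.assert_and_track(y > 0.5, "y_large")

    if s.check() == unsat:
        # The core names a contradictory subset of the constraints; the
        # paper converts such a core back into a machine-checked proof.
        print(s.unsat_core())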

Which Is the Fairest (Rent Division) of Them All?

What is a fair way to assign rooms to several housemates, and divide the rent between them? This is not just a theoretical question: many people have used the Spliddit website to obtain envy-free solutions to rent division instances. But envy freeness, in and of itself, is insufficient to guarantee outcomes that people view as intuitive and acceptable. We therefore focus on solutions that optimize a criterion of social justice, subject to the envy freeness constraint, in order to pinpoint the "fairest" solutions. We develop a general algorithmic framework that enables the computation of such solutions in polynomial time. We then study the relations between natural optimization objectives, and identify the maximin solution, which maximizes the minimum utility subject to envy freeness, as the most attractive. We demonstrate, in theory and using experiments on real data from Spliddit, that the maximin solution gives rise to significant gains in terms of our optimization objectives. Finally, a user study with Spliddit users as subjects demonstrates that people find the maximin solution to be significantly fairer than arbitrary envy-free solutions; this user study is unprecedented in that it asks people about their real-world rent division instances. Based on these results, the maximin solution has been deployed on Spliddit since April 2015.
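
As a rough illustration of the optimization the abstract refers to, the maximin solution can be phrased as a linear program over prices once a room assignment is fixed. The following is a schematic sketch in our own notation, not necessarily the paper's exact formulation.

    % Schematic LP for the maximin solution (notation ours): sigma is a
    % fixed room assignment, R the total rent, v_{ir} player i's value for
    % room r, and p_r the price charged for room r.
    \begin{align*}
    \max_{p,\,\lambda}\quad & \lambda \\
    \text{s.t.}\quad & v_{i\sigma(i)} - p_{\sigma(i)} \ge \lambda
        && \forall i \quad \text{(every player's utility is at least } \lambda\text{)} \\
    & v_{i\sigma(i)} - p_{\sigma(i)} \ge v_{ir} - p_r
        && \forall i, r \quad \text{(envy freeness)} \\
    & \textstyle\sum_r p_r = R.
    \end{align*}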

Competitive Algorithms from Competitive Equilibria: Non-Clairvoyant Scheduling under Polyhedral Constraints

We introduce and study a general scheduling problem that we term the Polytope Scheduling Problem (PSP). In this problem, jobs can have different arrival times and sizes, and the rates assigned by the scheduler to the jobs are subject to arbitrary packing constraints. The PSP framework captures a variety of scheduling problems, including the classical problems of unrelated machine scheduling, broadcast scheduling, and scheduling jobs of different parallelizability. It also captures scheduling constraints arising in diverse modern environments ranging from individual computer architectures to data centers. More concretely, PSP models multidimensional resource requirements and parallelizability, as well as network bandwidth requirements found in data center scheduling. We show a surprising result -- there is a single algorithm that is $O(1)$ competitive for all PSP instances when the objective is total completion time, and $O(1)$ competitive for a large sub-class of PSP instances when the objective is total flow time. This algorithm simply uses the well-known Proportional Fairness algorithm (PF) to perform allocations at each time instant. Though PF has been extensively studied in the context of maximizing fairness in resource allocation, we present the first analysis in adversarial and general settings for optimizing job latency. Further, PF is non-clairvoyant, meaning that the algorithm does not need to know job sizes until their completion. We establish our positive results by making novel connections with Economics, in particular the notions of market clearing, Gross Substitutes, and Eisenberg-Gale markets. We complement these positive results with a negative one: we show that for the total flow time objective, any non-clairvoyant algorithm for PSP has a strong lower bound on its competitive ratio unless given poly-logarithmic speed augmentation. This motivates the need to consider sub-classes of PSP when studying flow time. The sub-class for which we obtain positive results not only captures several well-studied models such as scheduling with speedup curves and related machine scheduling, but also captures as special cases hitherto unstudied scheduling problems such as single-source flow routing, routing multicast (video-on-demand) trees, and resource allocation with substitute resources.
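
For context, the Proportional Fairness allocation the abstract invokes is standardly characterized, at each time instant, as the solution of an Eisenberg-Gale-style convex program: maximize the sum of logarithms of the jobs' rates over the packing polytope. The restatement below uses our own notation.

    % Proportional Fairness at a single time instant (standard
    % Eisenberg--Gale form; notation ours): A(t) is the set of jobs alive
    % at time t, x_j the rate given to job j, and P the packing polytope.
    \begin{equation*}
    \max_{x \in P,\; x \ge 0} \ \sum_{j \in A(t)} \log x_j
    \end{equation*}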

Discrete Temporal Constraint Satisfaction Problems

A discrete temporal constraint satisfaction problem is a constraint satisfaction problem (CSP) whose constraint language consists of relations that are first-order definable over the order of the integers. We prove that every discrete temporal CSP is in P or NP-complete, unless it can be formulated as a finite-domain CSP, in which case the computational complexity is not known in general.

Invited Articles Foreword for 65:1

Constant-rate coding for multiparty interactive communication is impossible

We study coding schemes for multiparty interactive communication over synchronous networks that suffer from stochastic noise, where each bit is independently flipped with probability ε. We analyze the minimal overhead that must be added by the coding scheme in order to succeed in performing the computation despite the noise. Our main result is a lower bound on the communication of any noise-resilient protocol over a synchronous star network with n parties (where all parties communicate in every round). Specifically, we show a task that can be solved by communicating T bits over the noise-free network, but for which any protocol with success probability of 1 − o(1) must communicate at least Ω(T log n / log log n) bits when the channels are noisy. By a 1994 result of Rajagopalan and Schulman, the slowdown we prove is the highest one can obtain on any topology, up to a log log n factor. We complete our lower bound with a matching coding scheme that achieves the same overhead; thus, the capacity of (synchronous) star networks is Θ(log log n / log n). Our bounds prove that, despite several previous coding schemes with rate Ω(1) for certain topologies, no coding scheme with constant rate Ω(1) exists for arbitrary n-party noisy networks.

Excluding Grid Minors and EPTAS

Two of the most widely used approaches to obtain polynomial time approximation schemes (PTASs) on planar graphs are the Lipton-Tarjan separator-based approach and Baker's approach. In 2005, Demaine and Hajiaghayi strengthened both approaches using bidimensionality and obtained efficient polynomial time approximation schemes (EPTASs) for several problems, including Connected Dominating Set and Feedback Vertex Set. In this work, we unify the two strengthened approaches to combine the best of both worlds. We develop a framework for designing EPTASs on classes of graphs with the subquadratic grid minor (SQGM) property. Roughly speaking, a class of graphs has the SQGM property if, for every graph G from the class, the fact that G contains no t × t grid as a minor guarantees that the treewidth of G is subquadratic in t. Examples of classes of graphs with the SQGM property are planar graphs and, more generally, graphs excluding some fixed graph as a minor. At the heart of our framework is a decomposition lemma which states that for most bidimensional problems on a graph class 𝒢 with the SQGM property, there is a polynomial time algorithm which, given a graph G ∈ 𝒢 and an ε > 0 as input, outputs a vertex set X of size ε·OPT such that the treewidth of G − X is f(ε). Here, OPT is the objective function value of the problem in question and f is a function depending only on ε. This allows us to obtain EPTASs on (apex)-minor-free graphs for all problems covered by the previous framework, as well as for a wide range of packing problems, partial covering problems, and problems that are closed under neither taking minors nor contractions. To the best of our knowledge, for many of these problems, including Cycle Packing, F-Packing, F-Deletion, Max Leaf Spanning Tree, and Partial r-Dominating Set, no EPTASs were previously known even on planar graphs. We also prove novel grid-excluding theorems for unit disk graphs and map graphs without large cliques. Using these theorems, we show that these classes of graphs have the SQGM property. Based on the developed framework, we design EPTASs and subexponential time parameterized algorithms for various classes of problems on unit disk and map graphs.
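
Restated symbolically (in our own notation, not necessarily the paper's), the SQGM property asks that excluding a grid minor force subquadratic treewidth:

    % The SQGM property restated (notation ours): a class \mathcal{G} has
    % the SQGM property if there exist constants c and \delta > 0 such that
    \begin{equation*}
    G \in \mathcal{G} \ \text{and}\ G \ \text{has no}\ t \times t\ \text{grid minor}
    \;\Longrightarrow\; \mathrm{tw}(G) \le c \, t^{2-\delta}.
    \end{equation*}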

Analysing Snapshot Isolation

Snapshot isolation (SI) is a widely used consistency model for transaction processing, implemented by most major databases and some transactional memory systems. Unfortunately, its classical definition is given in a low-level operational way, by an idealised concurrency-control algorithm, and this complicates reasoning about the behaviour of applications running under SI. We give an alternative specification of SI that characterises it in terms of the transactional dependency graphs of Adya et al., generalising serialization graphs. Unlike previous work, our characterisation does not require augmenting dependency graphs with information about the start and commit points of transactions. We then exploit our specification to obtain two kinds of static analyses. The first checks when a set of transactions running under SI can be chopped into smaller pieces without introducing new behaviours, to improve performance. The second checks whether a set of transactions running under a weakening of SI behaves the same as when running under SI.
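
For readers new to SI, the following minimal Python trace (ours, not from the paper) replays write skew, the classic anomaly SI permits but serializability forbids: two transactions read the same snapshot, write disjoint keys, and both commit, producing a state no serial execution allows.

    # Write-skew illustration (ours). Application invariant: x + y >= 1;
    # a transaction decrements a variable only if its snapshot says the
    # invariant would survive.
    db = {"x": 1, "y": 1}

    # Both transactions begin concurrently and snapshot the same state.
    snapshot_t1 = dict(db)
    snapshot_t2 = dict(db)

    # T1: the snapshot shows x + y = 2, so decrementing x looks safe.
    if snapshot_t1["x"] + snapshot_t1["y"] >= 2:
        db["x"] = snapshot_t1["x"] - 1  # write set {x}

    # T2: its (equally stale) snapshot also shows x + y = 2.
    if snapshot_t2["x"] + snapshot_t2["y"] >= 2:
        db["y"] = snapshot_t2["y"] - 1  # write set {y}

    # SI's write-conflict check only aborts on overlapping write sets;
    # {x} and {y} are disjoint, so both commit -- yet no serial order of
    # T1 and T2 can produce this state.
    print(db)  # {'x': 0, 'y': 0}, violating x + y >= 1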

Characterizing Transactional Memory Consistency Conditions using Observational Refinement

Transactional memory (TM) facilitates the development of concurrent applications by letting a programmer designate certain code blocks as atomic. The common approach to stating TM correctness is through a consistency condition that restricts the possible TM executions. Unfortunately, existing consistency conditions fall short of formalizing the intuitive semantics of atomic blocks through which programmers use a TM. To close this gap, we formalize programmer expectations as observational refinement between TM implementations. This states that properties of a program using a concrete TM implementation can be established by analyzing its behavior with an abstract TM, serving as a specification of the concrete one. We show that a variant of transactional memory specification (TMS), a TM consistency condition, is equivalent to observational refinement for a programming language where local variables are rolled back upon a transaction abort. We thereby establish that TMS is the weakest acceptable condition for this case. We then propose a new consistency condition, called Strong TMS, and show that it is equivalent to observational refinement for a language where local variables are not rolled back upon aborts. Finally, we show that, under certain natural assumptions on TM implementations, Strong TMS is equivalent to a variant of the well-known condition of opacity. Our results suggest a new approach to evaluating TM consistency conditions and enable TM implementors and language designers to make better-informed decisions.

Fast Hamiltonicity checking via bases of perfect matchings

For an even integer t >= 2, the Matching Connectivity matrix H_t is a matrix that has rows and columns both labeled by all perfect matchings of the complete graph K_t on t vertices; an entry H_t[M_1, M_2] is 1 if the union of M_1 and M_2 is a Hamiltonian cycle and 0 otherwise. Motivated by the computational study of the Hamiltonicity problem, we present three results on the structure of H_t. We first show that H_t has rank at most 2^{t/2 - 1} over GF(2) via an appropriate factorization that explicitly provides families of matchings X_t forming bases for H_t. Second, we show how to quickly change representation between such bases. Third, we notice that the sets of matchings X_t induce permutation matrices within H_t. Subsequently, we use the factorization to obtain a 1.888^n time Monte Carlo algorithm that solves the Hamiltonicity problem in directed bipartite graphs. Our algorithm also counts the number of Hamiltonian cycles modulo two in directed bipartite or undirected graphs in the same time bound. Moreover, we use the fast basis change algorithm from the second result to present a Monte Carlo algorithm that, given an undirected graph on n vertices along with a path decomposition of width at most pw, decides Hamiltonicity in (2 + sqrt(2))^pw n^O(1) time. Finally, we use the third result to show that for every eps > 0 the base (2 + sqrt(2)) cannot be improved, i.e., there is no (2 + sqrt(2) - eps)^pw n^O(1) time algorithm unless the Strong Exponential Time Hypothesis fails; a faster algorithm for this problem would imply the breakthrough result of a (2 - eps)^n time algorithm for CNF-Sat for some eps > 0.
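
As a quick sanity check of the rank bound (our illustration, not the paper's proof), one can build H_4 by hand: K_4 has three perfect matchings, and the union of any two distinct ones is a Hamiltonian cycle, so H_4 is the 3×3 all-ones matrix minus the identity. Its rank over GF(2) is indeed 2 = 2^{4/2 - 1}, as the Python sketch below confirms.

    # Sanity check of the rank bound for t = 4 (illustrative only).
    H4 = [[0, 1, 1],
          [1, 0, 1],
          [1, 1, 0]]

    def rank_gf2(matrix):
        """Gaussian elimination over GF(2)."""
        rows = [row[:] for row in matrix]
        rank, col = 0, 0
        while rank < len(rows) and col < len(rows[0]):
            pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
            if pivot is None:
                col += 1
                continue
            rows[rank], rows[pivot] = rows[pivot], rows[rank]
            for r in range(len(rows)):
                if r != rank and rows[r][col]:
                    # Row addition over GF(2) is bitwise XOR.
                    rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
            rank += 1
            col += 1
        return rank

    print(rank_gf2(H4))  # 2, matching the bound 2^{4/2 - 1}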
