Journal of the ACM (JACM)

Latest Articles

The Applied Pi Calculus: Mobile Values, New Names, and Secure Communication

We study the interaction of the programming construct “new,” which generates statically scoped names, with communication via messages on channels. This interaction is crucial in security protocols, which are the main motivating examples for our work; it also appears in other programming-language contexts. We define the applied pi...
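
The interplay the abstract describes can be made concrete outside the calculus. Below is a minimal Python sketch (invented for illustration, not the authors' formalism): a restricted name is modelled as a fresh, unguessable token, and sending it over a public channel mimics scope extrusion.

```python
import queue
import secrets
import threading

public = queue.Queue()          # a public channel, roughly "c<x>.P | c(y).Q"

def sender():
    n = secrets.token_hex(16)   # "new n": a fresh, statically scoped name
    private = queue.Queue()     # a channel reachable only via the name n
    public.put((n, private))    # extrude the scope of n over the public channel
    private.put("secret payload")

def receiver():
    n, private = public.get()   # learn n; the scope of "new" has migrated
    print("received on fresh channel", n[:8], "->", private.get())

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start(); t1.join(); t2.join()
```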

Constant-Rate Coding for Multiparty Interactive Communication Is Impossible

NEWS

Important Note on P/NP: Some submissions purport to solve a long-standing open problem in complexity theory, such as the P/NP problem. Many of these turn out to be mistaken, and such submissions tax JACM volunteer editors and reviewers. JACM remains open to the possibility of eventual resolution of P/NP and related questions, and continues to welcome submissions on the subject. However, to mitigate the burden of repeated resubmissions due to incremental corrections of errors identified during editorial review, no author may submit more than one such paper to JACM, ACM Trans. on Algorithms, or ACM Trans. on Computation Theory in any 24-month period, except by invitation of the Editor-in-Chief. This applies to resubmissions of previously rejected manuscripts. Please consider this policy before submitting such a paper.

About JACM

The Journal of the ACM (JACM) provides coverage of the most significant work on principles of computer science, broadly construed. The scope of research we cover encompasses contributions of lasting value to any area of computer science. To be accepted, a paper must be judged to be truly outstanding in its field. JACM is interested in work in core computer science and at the boundaries, both the boundaries of subdisciplines of computer science and the boundaries between computer science and other fields.

Forthcoming Articles
Bandits with Knapsacks

Multi-armed bandit problems are the predominant theoretical model of exploration-exploitation tradeoffs in learning, and they have countless applications ranging from medical trials, to communication networks, to Web search and advertising. In many of these application domains the learner may be constrained by one or more supply (or budget) limits, in addition to the customary limitation on the time horizon. The literature lacks a general model encompassing these sorts of problems. We introduce such a model, called "bandits with knapsacks", that combines bandit learning with aspects of stochastic integer programming. In particular, a bandit algorithm needs to solve a stochastic version of the well-known "knapsack problem", which is concerned with packing items into a limited-size knapsack. A distinctive feature of our problem, in comparison to the existing regret-minimization literature, is that the optimal policy for a given latent distribution may significantly outperform the policy that plays the optimal fixed arm. Consequently, achieving sublinear regret in the bandits-with-knapsacks problem is significantly more challenging than in conventional bandit problems. We present two algorithms whose reward is close to the information-theoretic optimum: one is based on a novel "balanced exploration" paradigm, while the other is a primal-dual algorithm that uses multiplicative updates. Further, we prove that the regret achieved by both algorithms is optimal up to polylogarithmic factors. We illustrate the generality of the problem by presenting applications in a number of different domains, including electronic commerce, routing, and scheduling. As one example of a concrete application, we consider the problem of dynamic posted pricing with limited supply and obtain the first algorithm whose regret, with respect to the optimal dynamic policy, is sublinear in the supply.
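
As a rough illustration of the setting (not of the paper's balanced-exploration or primal-dual algorithms), the following Python sketch runs a UCB-style bandit that additionally charges each pull against a budget and stops when supply runs out; all names and parameters are invented.

```python
import math
import random

def bwk_ucb(arms, budget, horizon):
    """Toy bandits-with-knapsacks run: UCB1 on rewards plus a hard budget
    on cumulative cost. Illustrative only; not the paper's algorithms."""
    k = len(arms)
    pulls, rew = [0] * k, [0.0] * k
    spent = total = 0.0
    for t in range(1, horizon + 1):
        if spent >= budget:                     # supply limit: the distinctive constraint
            break
        if t <= k:                              # pull every arm once first
            i = t - 1
        else:                                   # then maximize the UCB index
            i = max(range(k), key=lambda a: rew[a] / pulls[a]
                    + math.sqrt(2 * math.log(t) / pulls[a]))
        r, c = arms[i]()                        # observe (reward, cost)
        pulls[i] += 1
        rew[i] += r
        spent += c
        total += r
    return total

# two hypothetical arms, each a sampler returning (reward, cost)
arms = [lambda: (random.random() * 0.5, 1.0),   # cheap but low reward
        lambda: (random.random(), 2.0)]         # costly but high reward
print(bwk_ucb(arms, budget=100.0, horizon=10_000))
```

Note that playing the single best fixed arm can be far from optimal here: mixing a cheap arm and a lucrative arm can extract more total reward per unit of budget, which is exactly the gap the abstract highlights.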

Fair Enough: Guaranteeing Approximate Maximin Shares

We consider the problem of fairly allocating indivisible goods, focusing on a recently introduced notion of fairness called the maximin share guarantee: each player's value for his allocation should be at least as high as what he can guarantee by dividing the items into as many bundles as there are players and receiving his least desirable bundle. Assuming additive valuation functions, we show that such allocations may not exist, but allocations guaranteeing each player 2/3 of the above value always exist and can be computed in polynomial time when the number of players is constant. These theoretical results have direct practical implications.
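
The maximin share itself is easy to compute by brute force on tiny instances, which may help make the definition concrete; the sketch below is illustrative only (the paper's 2/3-guarantee algorithm is far more involved).

```python
from itertools import product

def maximin_share(values, n):
    """Brute-force maximin share for additive `values` (one number per good)
    among n players: maximize, over all splits of the goods into n labeled
    bundles, the value of the least valuable bundle. O(n**m); toy sizes only."""
    best = 0.0
    for assignment in product(range(n), repeat=len(values)):
        bundles = [0.0] * n
        for value, b in zip(values, assignment):
            bundles[b] += value
        best = max(best, min(bundles))
    return best

# hypothetical additive valuation over five goods, three players:
print(maximin_share([7, 5, 4, 3, 1], n=3))   # 6.0, e.g. {7}, {5,1}, {4,3}
```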

Proving the Incompatibility of Efficiency and Strategyproofness via SMT Solving

Two important requirements when aggregating the preferences of multiple agents are that the outcome should be economically efficient and the aggregation mechanism should not be manipulable. In this paper, we provide a computer-aided proof of a sweeping impossibility using these two conditions for randomized aggregation mechanisms. More precisely, we show that every efficient aggregation mechanism can be manipulated for all expected utility representations of the agents' preferences. This settles an open problem and strengthens a number of existing theorems, including statements that were shown within the special domain of assignment. Our proof is obtained by formulating the claim as a satisfiability problem over predicates from real-valued arithmetic, which is then checked using an SMT (satisfiability modulo theories) solver. In order to verify the correctness of the result, a minimal unsatisfiable set of constraints returned by the SMT solver was then translated back into a proof in higher-order logic, which was automatically verified by an interactive theorem prover. To the best of our knowledge, this is the first application of SMT solvers in computational social choice.
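
The proof style can be illustrated in miniature with the z3 Python bindings: encode a small system of real-arithmetic constraints that provably conflict, then ask the solver for an unsatisfiable core. The constraints below are invented toy stand-ins, not the paper's encoding of efficiency and strategyproofness.

```python
from z3 import Reals, Solver, unsat

# Toy stand-in for the paper's approach: decision variables are lottery
# probabilities; two invented linear conditions provably conflict.
p, q = Reals("p q")
s = Solver()
s.assert_and_track(p >= 0, "p_nonneg")
s.assert_and_track(q >= 0, "q_nonneg")
s.assert_and_track(p + q == 1, "lottery")
s.assert_and_track(2 * p + q >= 1.8, "efficiency_like")   # forces p >= 0.8
s.assert_and_track(p - q <= 0.5, "sp_like")               # forces p <= 0.75

assert s.check() == unsat     # no lottery satisfies all constraints
print(s.unsat_core())         # an unsat core over the tracked labels
```

The core z3 returns is not guaranteed minimal; per the abstract, the paper's pipeline additionally translates a minimal unsatisfiable set back into a proof checked by an interactive theorem prover.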

Discrete Temporal Constraint Satisfaction Problems

A discrete temporal constraint satisfaction problem is a constraint satisfaction problem (CSP) whose constraint language consists of relations that are first-order definable over the order of the integers. We prove that every discrete temporal CSP is in P or NP-complete, unless it can be formulated as a finite-domain CSP, in which case its computational complexity is in general not known.
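
For the purely order-theoretic fragment of such languages, satisfiability over the integers depends only on the relative order of values, so brute force over a small window decides it. The toy Python solver below is illustrative only; full discrete temporal languages (e.g., ones defining the successor relation) need more care.

```python
from itertools import product

def order_csp_satisfiable(n_vars, constraints):
    """Toy solver for CSPs whose relations are order constraints over the
    integers (here just '<', '<=', '=', '!='). Such relations are invariant
    under order-preserving maps, so a satisfying assignment exists over Z
    iff one exists with values in {0, ..., n_vars - 1}."""
    ops = {"<": lambda a, b: a < b, "<=": lambda a, b: a <= b,
           "=": lambda a, b: a == b, "!=": lambda a, b: a != b}
    for assign in product(range(n_vars), repeat=n_vars):
        if all(ops[op](assign[x], assign[y]) for (x, op, y) in constraints):
            return True
    return False

# x < y, y < z, z != x is satisfiable; x < y together with y < x is not
print(order_csp_satisfiable(3, [(0, "<", 1), (1, "<", 2), (2, "!=", 0)]))  # True
print(order_csp_satisfiable(2, [(0, "<", 1), (1, "<", 0)]))                # False
```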

Full Abstraction for Probabilistic PCF

We present a probabilistic version of PCF, a well-known simply typed universal functional language. The type hierarchy is based on a single ground type of natural numbers. Although the language is globally call-by-name, we allow call-by-value evaluation for ground-type arguments in order to provide the language with suitable algorithmic expressiveness. We describe a denotational semantics based on probabilistic coherence spaces, a model of classical Linear Logic developed in previous work. We prove an adequacy theorem and an equational full abstraction theorem, showing that equality in the model coincides with a natural notion of observational equivalence.
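
The denotational idea at ground type can be previewed with a small distribution monad: ground-type programs denote (sub)probability distributions over the naturals, and call-by-value sequencing is Kleisli composition; full abstraction says equality of such denotations coincides with observational equivalence. The Python combinators below are invented for illustration and are not the probabilistic coherence space model.

```python
from collections import defaultdict
from fractions import Fraction

def ret(n):
    """A deterministic numeral: the point distribution on n."""
    return {n: Fraction(1)}

def coin():
    """Fair probabilistic choice between 0 and 1."""
    return {0: Fraction(1, 2), 1: Fraction(1, 2)}

def bind(dist, f):
    """Call-by-value sequencing at ground type: evaluate the argument to a
    value, then continue (Kleisli composition for the distribution monad)."""
    out = defaultdict(Fraction)
    for v, p in dist.items():
        for w, q in f(v).items():
            out[w] += p * q
    return dict(out)

# the observable behaviour of "coin + coin" is its distribution over N:
two = bind(coin(), lambda a: bind(coin(), lambda b: ret(a + b)))
print(two)   # probabilities 1/4, 1/2, 1/4 on the values 0, 1, 2
```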

Excluding Grid Minors and EPTAS

Two of the most widely used approaches to obtain polynomial-time approximation schemes (PTASs) on planar graphs are the Lipton-Tarjan separator-based approach and Baker's approach. In 2005, Demaine and Hajiaghayi strengthened both approaches using bidimensionality and obtained efficient polynomial-time approximation schemes (EPTASs) for several problems, including Connected Dominating Set and Feedback Vertex Set. In this work, we unify the two strengthened approaches to combine the best of both worlds. We develop a framework for designing EPTASs on classes of graphs with the subquadratic grid minor (SQGM) property. Roughly speaking, a class of graphs has the SQGM property if, for every graph G from the class, the fact that G contains no t × t grid as a minor guarantees that the treewidth of G is subquadratic in t. Examples of classes of graphs with the SQGM property are planar graphs and, more generally, graphs excluding some fixed graph as a minor. At the heart of our framework is a decomposition lemma stating that for most bidimensional problems on a graph class 𝒢 with the SQGM property, there is a polynomial-time algorithm which, given a graph G ∈ 𝒢 and an ε > 0 as input, outputs a vertex set X of size ε·OPT such that the treewidth of G − X is f(ε). Here, OPT is the objective function value of the problem in question and f is a function depending only on ε. This allows us to obtain EPTASs on (apex-)minor-free graphs for all problems covered by the previous framework, as well as for a wide range of packing problems, partial covering problems, and problems that are closed neither under taking minors nor under contractions. To the best of our knowledge, for many of these problems, including Cycle Packing, F-Packing, F-Deletion, Max Leaf Spanning Tree, and Partial r-Dominating Set, no EPTASs were previously known even on planar graphs. We also prove novel grid-excluding theorems for unit disk and map graphs without large cliques. Using these theorems, we show that these classes of graphs have the SQGM property. Based on the developed framework, we design EPTASs and subexponential-time parameterized algorithms for various classes of problems on unit disk and map graphs.

Analysing Snapshot Isolation

Snapshot isolation (SI) is a widely used consistency model for transaction processing, implemented by most major databases and some transactional memory systems. Unfortunately, its classical definition is given in a low-level operational way, by an idealised concurrency-control algorithm, and this complicates reasoning about the behaviour of applications running under SI. We give an alternative specification of SI that characterises it in terms of the transactional dependency graphs of Adya et al., generalising serialization graphs. Unlike previous work, our characterisation does not require augmenting dependency graphs with additional information about the start and commit points of transactions. We then exploit our specification to obtain two kinds of static analyses. The first checks when a set of transactions running under SI can be chopped into smaller pieces without introducing new behaviours, to improve performance. The other checks whether a set of transactions running under a weakening of SI behaves the same as when running under SI.
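
The gap between SI and serializability is easy to witness with a toy engine. The Python sketch below (invented; no production system's algorithm) gives transactions snapshot reads and first-committer-wins write-write conflict detection, and exhibits the classic write-skew anomaly.

```python
class SIStore:
    """Toy snapshot-isolated store: a transaction reads from a snapshot taken
    at begin, and commits only if no concurrently committed transaction wrote
    one of the same keys (first-committer-wins)."""
    def __init__(self, data):
        self.data = dict(data)
        self.log = []                      # (commit_ts, keys_written)
        self.ts = 0

    def begin(self):
        self.ts += 1
        return {"snap": dict(self.data), "start": self.ts, "writes": {}}

    def read(self, txn, k):
        return txn["writes"].get(k, txn["snap"][k])

    def write(self, txn, k, v):
        txn["writes"][k] = v

    def commit(self, txn):
        for cts, keys in self.log:
            if cts > txn["start"] and keys & txn["writes"].keys():
                raise RuntimeError("abort: write-write conflict")
        self.ts += 1
        self.log.append((self.ts, set(txn["writes"])))
        self.data.update(txn["writes"])

db = SIStore({"x": 1, "y": 1})              # intended invariant: x + y >= 1
t1, t2 = db.begin(), db.begin()
if db.read(t1, "x") + db.read(t1, "y") >= 2:
    db.write(t1, "y", 0)                    # t1 preserves the invariant alone
if db.read(t2, "x") + db.read(t2, "y") >= 2:
    db.write(t2, "x", 0)                    # so does t2, alone
db.commit(t1)
db.commit(t2)                               # disjoint write sets: both commit
print(db.data)                              # {'x': 0, 'y': 0} -- write skew
```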

Characterizing Transactional Memory Consistency Conditions using Observational Refinement

Transactional memory (TM) facilitates the development of concurrent applications by letting a programmer designate certain code blocks as atomic. The common approach to stating TM correctness is through a consistency condition that restricts the possible TM executions. Unfortunately, existing consistency conditions fall short of formalizing the intuitive semantics of atomic blocks through which programmers use a TM. To close this gap, we formalize programmer expectations as observational refinement between TM implementations. This states that properties of a program using a concrete TM implementation can be established by analyzing its behavior with an abstract TM, serving as a specification of the concrete one. We show that a variant of transactional memory specification (TMS), a TM consistency condition, is equivalent to observational refinement for a programming language where local variables are rolled back upon a transaction abort. We thereby establish that TMS is the weakest acceptable condition for this case. We then propose a new consistency condition, called Strong TMS, and show that it is equivalent to observational refinement for a language where local variables are not rolled back upon aborts. Finally, we show that, under certain natural assumptions on TM implementations, Strong TMS is equivalent to a variant of the well-known condition of opacity. Our results suggest a new approach to evaluating TM consistency conditions and enable TM implementors and language designers to make better-informed decisions.
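
Why the treatment of local variables matters can be seen in a toy retry-loop encoding of atomic blocks: if locals survive an abort, the aborted execution becomes observable to the program, which is why a stronger condition is needed in that case. All names below are invented for illustration.

```python
class Abort(Exception):
    """Signals that the TM has aborted the current transaction."""

class FlakyTM:
    """Toy TM whose first transactional read spuriously aborts; a stand-in
    for any concrete implementation."""
    def __init__(self):
        self.first = True
    def begin(self): pass
    def commit(self): pass
    def read(self, v):
        if self.first:
            self.first = False
            raise Abort()
        return v

def atomic(tm, body):
    """The retry loop a compiler might emit for an `atomic` block."""
    while True:
        tm.begin()
        try:
            result = body()
            tm.commit()
            return result
        except Abort:
            pass   # note: variables touched by `body` are NOT restored here

tm = FlakyTM()
attempts = [0]                 # plays the role of a local variable
def body():
    attempts[0] += 1           # survives the abort in this encoding
    return tm.read(42)

print(atomic(tm, body), attempts[0])   # "42 2": the aborted run is observable
```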

Fast Hamiltonicity checking via bases of perfect matchings

For an even integer t >= 2, the Matching Connectivity matrix H_t is a matrix that has rows and columns both labeled by all perfect matchings of the complete graph K_t on t vertices; an entry H_t[M_1, M_2] is 1 if the union of M_1 and M_2 is a Hamiltonian cycle and 0 otherwise. Motivated by the computational study of the Hamiltonicity problem, we present three results on the structure of H_t. We first show that H_t has rank at most 2^{t/2 - 1} over GF(2) via an appropriate factorization that explicitly provides families of matchings X_t forming bases for H_t. Second, we show how to quickly change representation between such bases. Third, we notice that the sets of matchings X_t induce permutation matrices within H_t. Subsequently, we use the factorization to obtain a 1.888^n time Monte Carlo algorithm that solves the Hamiltonicity problem in directed bipartite graphs. Our algorithm also counts the number of Hamiltonian cycles modulo two in directed bipartite or undirected graphs in the same time bound. Moreover, we use the fast basis-change algorithm from the second result to present a Monte Carlo algorithm that, given an undirected graph on n vertices along with a path decomposition of width at most pw, decides Hamiltonicity in (2 + sqrt(2))^pw n^O(1) time. Finally, we use the third result to show that for every eps > 0 this running time cannot be improved to (2 + sqrt(2) - eps)^pw n^O(1) unless the Strong Exponential Time Hypothesis fails, i.e., a faster algorithm for this problem would imply a breakthrough (2 - eps)^n time algorithm for CNF-Sat for some eps > 0.
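
The rank bound can be sanity-checked by brute force for small t: build H_t over all perfect matchings of K_t and compute its rank over GF(2). The Python sketch below is purely illustrative (helper names invented).

```python
def perfect_matchings(vs):
    """All perfect matchings of the complete graph on the vertex tuple vs."""
    if not vs:
        yield frozenset()
        return
    v, rest = vs[0], vs[1:]
    for i in range(len(rest)):
        for m in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield m | {frozenset((v, rest[i]))}

def union_is_ham_cycle(m1, m2, t):
    """True iff the union of two perfect matchings is a Hamiltonian cycle:
    they must be edge-disjoint (t edges total) and form a single cycle."""
    edges = m1 | m2
    if len(edges) != t:
        return False
    adj = {v: [] for v in range(t)}
    for e in edges:
        a, b = tuple(e)
        adj[a].append(b)
        adj[b].append(a)
    prev, cur, seen = 0, adj[0][0], {0}
    while cur != 0:                       # walk the 2-regular union from 0
        seen.add(cur)
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        prev, cur = cur, nxt
    return len(seen) == t

def gf2_rank(rows):
    """Rank of a 0/1 matrix over GF(2); rows encoded as Python ints."""
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            low = pivot & -pivot
            rows = [r ^ pivot if r & low else r for r in rows]
    return rank

for t in (4, 6):
    ms = list(perfect_matchings(tuple(range(t))))
    rows = [sum(1 << j for j, m2 in enumerate(ms) if union_is_ham_cycle(m1, m2, t))
            for m1 in ms]
    print(t, gf2_rank(rows), 2 ** (t // 2 - 1))   # computed rank vs 2^{t/2-1}
```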

Rumor Spreading and Conductance

Coin Flipping of Any Constant Bias Implies One-Way Functions

We show that the existence of a coin-flipping protocol safe against any non-trivial constant bias (e.g., .499) implies the existence of one-way functions. This improves upon a recent result of Haitner and Omri [FOCS '11], who proved this implication for protocols with bias (sqrt(2) - 1)/2 - o(1) ≈ .207. Unlike the result of Haitner and Omri, our result also holds for weak coin-flipping protocols.
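
For orientation on the other direction of this connection, here is the classic Blum-style coin flip built from a commitment, sketched in Python with a hash as a heuristic stand-in for a secure commitment scheme (hiding and binding are assumed, not proved).

```python
import hashlib
import secrets

# Alice commits to a random bit without revealing it.
a_bit = secrets.randbits(1)
nonce = secrets.token_bytes(32)
commitment = hashlib.sha256(nonce + bytes([a_bit])).hexdigest()

# Bob, seeing only the commitment, announces his own bit.
b_bit = secrets.randbits(1)

# Alice opens the commitment; Bob verifies; the coin is the XOR.
assert hashlib.sha256(nonce + bytes([a_bit])).hexdigest() == commitment
print("coin:", a_bit ^ b_bit)
```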
