Seminar of the Talgo team

Tuesday, November 24, 2015: Petra Berenbrink (Simon Fraser University, Canada)

Efficient Plurality Consensus on the Complete Graph

4pm at ENS, 45 rue d'Ulm, salle des Hauts du DI (Staircase A, 3rd floor, Dpt Informatique, at the very end of the corridor).

Abstract:

Plurality consensus considers a network of n nodes, each having one of k opinions. Nodes execute a (randomized) distributed protocol to gain knowledge about opinions in their neighborhood, which might cause them to change their opinion. The goal is to find a protocol that, with high probability, causes all nodes to adopt the plurality (the opinion initially supported by the most nodes). A major open question has been whether there is a protocol for the complete graph that converges in polylogarithmic time and uses only polylogarithmic memory (per node). This paper answers this question affirmatively.

We propose two plurality consensus protocols that require only an arbitrarily small constant multiplicative bias towards the plurality. Both assume a complete graph and realize communication via a random phone call model. The first protocol is very simple and achieves plurality consensus within O(log k · log log n) rounds using log k + Θ(log log k) bits of memory. The second protocol achieves plurality consensus within O(log log n · log n) rounds using only log k + O(1) bits of memory. The latter result disproves a conjecture of Becchetti et al. (SODA 2015), which would imply that any protocol using log k + O(1) bits of memory has worst-case runtime at least linear in k. At the heart of our protocols lies the use of an undecided state, a technique introduced by Angluin et al.
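
To make the undecided-state technique concrete, here is a minimal toy simulation (a sketch only, not the memory-efficient protocols of the talk: the pairwise-contact loop, the two-opinion setting, and the step budget are illustrative assumptions):

    import random

    def undecided_state_dynamics(opinions, steps=100_000, rng=random):
        """Toy undecided-state dynamics in the style of Angluin et al.:
        a node that contacts a node holding a different opinion becomes
        undecided (None); an undecided node adopts the contacted node's
        opinion. Illustration only, not the protocols of the talk."""
        nodes = list(opinions)
        n = len(nodes)
        for _ in range(steps):
            u, v = rng.randrange(n), rng.randrange(n)
            if u == v:
                continue
            if nodes[u] is None and nodes[v] is not None:
                nodes[u] = nodes[v]      # undecided adopts the sampled opinion
            elif None not in (nodes[u], nodes[v]) and nodes[u] != nodes[v]:
                nodes[u] = None          # conflicting opinions -> undecided
            if len({x for x in nodes if x is not None}) <= 1:
                break                    # only one opinion (plus undecided) left
        return nodes

    # With a small constant bias (60 vs 40), the plurality opinion 0 wins whp:
    print(set(undecided_state_dynamics([0] * 60 + [1] * 40)))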

Monday, September 15, 2014: James R. Lee and David Steurer

Approximate constraint satisfaction requires large LP relaxations

2:30 pm at ENS, 45 rue d'Ulm, salle des Hauts du DI (Staircase A, 3rd floor, Dpt Informatique, at the very end of the corridor).

Abstract:

We prove super-polynomial lower bounds on the size of linear programming relaxations for approximation versions of constraint satisfaction problems. In particular, we show that no polynomial-sized LP can achieve better than a 1/2-approximation for MAX-CUT, a 7/8-approximation for MAX-3SAT, or a 3/4-approximation for MAX-2SAT.

There has been a recent sequence of exciting lower bounds on the size of extended formulations for various polytopes that arise in combinatorial optimization. Unfortunately, these prior results are essentially based on one particular family of "high-complexity" matrices coming from the set disjointness problem. We show that for max-CSP problems, general polynomial-sized LPs are exactly as powerful as LPs arising from a constant number of rounds of the Sherali-Adams hierarchy. This gives us access to a wealth of non-trivial hard instances.

Joint work with Siu On Chan and Prasad Raghavendra.

Friday, May 23, 2014: Valerie King (University of Victoria, Canada)

Byzantine Agreement with a Strong Adversary in Polynomial Expected Time

10am, at ENS, 45 rue d'Ulm, salle des Hauts du DI (Staircase A, 3rd floor, Dpt Informatique, at the very end of the corridor).

Abstract: Byzantine agreement is a fundamental problem of distributed computing which involves coordination of players when a constant fraction are controlled by a malicious adversary. Each player starts with a bit, and the goal is for all good players to agree on a bit whose value is equal to some good player's initial bit.

I'll present the first algorithm with polynomial expected time in the asynchronous message-passing model with a strong adversary. A strong adversary in the asynchronous model may corrupt players adaptively, may schedule message delivery with arbitrarily long delays unknown to the players, and has full knowledge of the state of all players.

Ben-Or's 1983 expected exponential time algorithm reduced the problem to a series of iterations in which the players attempt to agree on a global coinflip using their private coinflips. In our algorithm, players use the history of observed coinflips to individually detect statistically deviant behavior by adversarially controlled players, so as to eventually disregard their inputs to arrive at a global coinflip.
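
As a toy illustration of the deviation-detection idea (a sketch under simplifying assumptions: the threshold c * sqrt(t) and the fair-coin null model are ours, not the detection rule of the actual algorithm):

    import math
    import random

    def deviant_players(history, c=3.0):
        """Flag players whose observed coinflips (lists of 0/1) have a head
        count deviating from the fair mean t/2 by more than c * sqrt(t).
        A crude concentration-bound test, not King--Saia's actual rule."""
        flagged = set()
        for player, flips in history.items():
            t = len(flips)
            if t and abs(sum(flips) - t / 2) > c * math.sqrt(t):
                flagged.add(player)
        return flagged

    # Honest players flip fair coins; one corrupted player biases its coins.
    history = {p: [random.randint(0, 1) for _ in range(1000)] for p in range(4)}
    history["bad"] = [int(random.random() < 0.9) for _ in range(1000)]
    print(deviant_players(history))  # typically {'bad'}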

No knowledge of distributed computing will be assumed. One of my goals is to give a characterization of the model which is easy for non-experts to reason about.

This is joint work with Jared Saia.

Monday, May 26, 2014: Antoine Vigneron (King Abdullah University of Science and Technology, Saudi Arabia)

Faster algorithms for computing the straight skeleton of a polygon

10:30am, at ENS, 45 rue d'Ulm, salle des Hauts du DI (Staircase A, 3rd floor, Dpt Informatique, at the very end of the corridor).

The straight skeleton of a polygon is defined as the trace of the vertices when the polygon shrinks, each edge moving inwards at the same speed in the direction perpendicular to it. We present a new algorithm for computing the straight skeleton of a polygon. For a non-degenerate input polygon with n vertices, r of which are reflex, the running time of our algorithm is O(n^{4/3+e} + n(log n)log r), for any e>0. It improves on previously known algorithms.

Past seminars

Thursday, March 20, 2014: Jason Hartline (Northwestern University and Harvard University)

Mechanism Design for Data Science

11:15am, at ENS, 45 rue d'Ulm, Salle W (45 rue d'Ulm, main building, staircase B, go to 3rd floor, then follow the signs "Toits du DMA": go up one more level using the other staircase).

The promise of data science is that system data can be analyzed and its understanding can be used to improve the system (i.e., to obtain good outcomes). For this promise to be realized, the necessary understanding must be inferable from the data. Whether or not this understanding is inferable often depends on the system itself. Therefore, the system needs to be designed both to obtain good outcomes and to admit good inference. This talk will explore this issue in a mechanism design context where the designer would like to use past bid data to adapt an auction mechanism to optimize revenue.

Data analysis is necessary for revenue optimization in auctions, but revenue optimization is at odds with good inference. The revenue-optimal auction for selling an item is typically parameterized by a reserve price, and the appropriate reserve price depends on how much the bidders are willing to pay. This willingness to pay could potentially be learned by inference, but a reserve price precludes learning anything about the willingness-to-pay of bidders who are not willing to pay the reserve price. The auctioneer could never learn that lowering the reserve price would give a higher revenue (even if it would). To address this impossibility, the auctioneer could sacrifice revenue-optimality in the initial auction to obtain better inference properties so that the auction's parameters can be adapted to changing preferences in the future. In this talk, I will develop a theory for optimal auction design subject to good inference.
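
The censoring effect is easy to see in a toy simulation (a sketch under assumed conditions: two truthful bidders with uniform [0,1] values in a second-price auction; none of this setup comes from the talk itself):

    import random

    def run_auctions(reserve, n_auctions=10_000, bidders=2, rng=random):
        """Second-price auctions with a reserve price. Returns the total
        revenue and the bids the seller gets to observe: values below the
        reserve never show up, so demand below the reserve cannot be
        estimated from this data."""
        revenue, observed = 0.0, []
        for _ in range(n_auctions):
            values = sorted(rng.random() for _ in range(bidders))
            if values[-1] >= reserve:                # the item sells
                revenue += max(values[-2], reserve)  # second price, floored
                observed.extend(v for v in values if v >= reserve)
        return revenue, observed

    for r in (0.0, 0.5, 0.7):
        rev, obs = run_auctions(r)
        print(f"reserve={r:.1f}  revenue={rev:.0f}  lowest observed bid={min(obs):.3f}")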

Joint work with Shuchi Chawla and Denis Nekipelov.

Tuesday, January 21, 2014: Patrice Ossona de Mendez (CNRS, Centre d'Analyse et de Mathématiques Sociales, Paris, and Computer Science Institute of Charles University, Prague)

Low Tree-depth Decompositions (With Applications to Algorithms, Homomorphisms, and Limits)

10:30am, at ENS, 45 rue d'Ulm, salle des Hauts du DI (Staircase A, 3rd floor, Dpt Informatique, at the very end of the corridor).

The tree-depth of a graph G is the minimum height of a forest F such that every edge of G connects vertices that have an ancestor-descendant relationship in F. This parameter is minor-monotone and it is closely related to other measures in combinatorics such as vertex ranking number, ordered chromatic number, and minimum elimination tree height; it is also closely related to the cycle rank of directed graphs, the star height of regular languages, and the quantifier rank of formulas.
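
The definition is equivalent to the simple recursion td(G) = 1 + min_v td(G - v) for connected G (and the maximum over connected components otherwise), which the following brute-force sketch implements; it is exponential time and meant only to illustrate the parameter on tiny graphs:

    from functools import lru_cache

    def treedepth(vertices, edges):
        """Exponential-time tree-depth via the recursion
        td(G) = 1 + min_v td(G - v) for connected G."""
        def components(vs):
            vs, comps = set(vs), []
            while vs:
                comp, stack = set(), [next(iter(vs))]
                while stack:
                    u = stack.pop()
                    if u in comp:
                        continue
                    comp.add(u)
                    stack.extend(w for a, b in edges if u in (a, b)
                                 for w in (a, b) if w in vs and w not in comp)
                vs -= comp
                comps.append(frozenset(comp))
            return comps

        @lru_cache(maxsize=None)
        def td(vs):
            if not vs:
                return 0
            comps = components(vs)
            if len(comps) > 1:
                return max(td(c) for c in comps)
            return 1 + min(td(vs - {v}) for v in vs)

        return td(frozenset(vertices))

    print(treedepth(range(4), [(0, 1), (1, 2), (2, 3)]))  # path P4: 3
    print(treedepth(range(4), [(0, 1), (0, 2), (0, 3),
                               (1, 2), (1, 3), (2, 3)]))  # K4: 4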

For a non-negative integer p, a low tree-depth decomposition with parameter p of a graph G is a partition V_1, ..., V_k of its vertex set such that any i <= p parts induce a subgraph with tree-depth at most i. The minimum number of parts k for which a low tree-depth decomposition with parameter p of G exists is chi_p(G). This minor-monotone graph invariant is related to the densities of the shallow minors (topological minors, or immersions) of the graph G.

In this talk, we survey several theoretical and algorithmic applications of low tree-depth decompositions --- like distance coloring, taxonomy of graph classes, restricted homomorphism dualities, and first-order model-checking in classes of sparse graphs --- as well as connections with a recent construction of explicit limit objects for trees and graphs with bounded tree-depth.

This is joint work with Jaroslav Nesetril.

Friday, June 28, 2013: Cristopher Moore (Santa Fe Institute)

Epsilon-biased sets with just a little randomness

10am, at ENS, 45 rue d'Ulm, salle des Hauts du DI (Staircase A, 3rd floor, Dpt Informatique, at the very end of the corridor).

Subsets of F_2^n that are epsilon-biased, meaning that the parity of any set of bits is even or odd with probability epsilon-close to 1/2, are useful tools in derandomization. They also correspond to optimal error-correcting codes, i.e. codes meeting the Gilbert-Varshamov bound, with distance close to n/2.
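
As a concrete illustration of the definition, here is a brute-force bias checker together with the naive randomized construction (a sketch: the sample size 4n/eps^2 is an illustrative constant, and the check enumerates all 2^n - 1 parity tests, so it only makes sense for small n):

    import random
    from itertools import product

    def bias(S, n):
        """Maximum bias of a set S of vectors in F_2^n: for every nonzero
        test y, compare Pr[<x,y> = 0 (mod 2)] over x in S with 1/2."""
        worst = 0.0
        for y in product((0, 1), repeat=n):
            if not any(y):
                continue  # skip the all-zero test
            even = sum(1 for x in S
                       if sum(a * b for a, b in zip(x, y)) % 2 == 0)
            worst = max(worst, abs(even / len(S) - 0.5))
        return worst

    # A random set of size O(n/eps^2) is eps-biased with good probability:
    n, eps = 8, 0.25
    S = [tuple(random.randint(0, 1) for _ in range(n))
         for _ in range(int(4 * n / eps ** 2))]
    print(bias(S, n))  # typically well below eps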

A simple randomized construction shows that such sets exist of size O(n/eps^2); recently, Ben-Aroya and Ta-Shma gave a deterministic construction of size O((n/eps^2)^(5/4)). I will review deterministic constructions of Alon, Goldreich, Håstad, and Peralta of sets of size O(n/eps^3) and O(n^2/eps^2), and discuss the delightful pseudorandom properties of the Legendre symbol along the way.

Then, rather than derandomizing these sets completely in exchange for making them larger, we will try moving in a different direction on the size-randomness plane, constructing sets of optimal size O(n/eps^2) with as few random bits as possible. The naive randomized construction requires O(n^2/eps^2) random bits. I will show that this can be reduced to O(n log(n/eps)) random bits. Like Alon et al., our construction uses the Legendre symbol and Weil sums, but in a different way to control high moments of the bias. I'll end by saying a few words about Ramsey graphs and random polynomials.

Thursday, April 25, 2013: Ioannis Milis (Athens)

Speed scaling scheduling and convex programming

11am, at ENS, 45 rue d'Ulm, salle des Hauts du DI (Staircase A, 3rd floor, Dpt Informatique, at the very end of the corridor).

We consider scheduling problems on speed-scalable processors so as to minimize the energy consumption. In particular, we focus on designing efficient algorithms for this class of problems by formulating them as convex programs. We introduce this approach via the classical YDS algorithm [Yao, Demers and Shenker; FOCS 1995] and its optimality proof using the Karush-Kuhn-Tucker (KKT) conditions for convex programs [Bansal, Kimbrel, Pruhs; FOCS 2004; JACM 2007]. Then we consider an Open-Shop scheduling problem in the speed-scaling setting and we present an optimal algorithm obtained by applying the primal-dual method in the context of convex programming.
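
For readers unfamiliar with YDS, the following naive sketch conveys the idea (the instance format and the interval-contraction bookkeeping are our assumptions; efficient implementations are considerably more careful):

    def yds(jobs):
        """Sketch of the YDS speed-scaling algorithm. jobs is a list of
        (release, deadline, work) triples. Repeatedly find the interval of
        maximum density (total work of jobs that fit entirely inside it,
        divided by its length), run those jobs there at that constant
        speed, then contract the interval out of the timeline."""
        jobs = [list(j) for j in jobs]
        schedule = []
        while jobs:
            times = sorted({t for r, d, _ in jobs for t in (r, d)})
            best = None
            for i, t1 in enumerate(times):
                for t2 in times[i + 1:]:
                    inside = [j for j in jobs if t1 <= j[0] and j[1] <= t2]
                    dens = sum(w for _, _, w in inside) / (t2 - t1)
                    if inside and (best is None or dens > best[0]):
                        best = (dens, t1, t2, inside)
            dens, t1, t2, inside = best
            schedule.append(((t1, t2), dens))  # critical interval and its speed
            jobs = [j for j in jobs if j not in inside]
            for j in jobs:  # shift/clamp remaining jobs past the cut interval
                j[0] = j[0] - (t2 - t1) if j[0] >= t2 else min(j[0], t1)
                j[1] = j[1] - (t2 - t1) if j[1] >= t2 else min(j[1], t1)
        return schedule

    # One dense job inside [1,3] forces a critical interval of speed 3:
    print(yds([(0, 4, 2), (1, 3, 6), (2, 10, 4)]))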

Friday, December 7, 2012: Lionel Pournin (EFREI)

The diameters of associahedra

10:30am, at ENS, 45 rue d'Ulm, salle des Hauts du DI (Staircase A, 3rd floor, Dpt Informatique, at the very end of the corridor).

Abstract: Associahedra are polytopes that were independently discovered within different areas of mathematics such as algebraic topology, discrete geometry, and combinatorial optimization. The boundary complex of the d-dimensional associahedron can be equivalently represented using bracketed words of d+2 letters, triangulations of convex polygons with d+3 vertices, or binary trees of size d+1.

In 1988, Daniel Sleator, Robert Tarjan, and William Thurston showed that the diameter of this polytope is at most 2d-4 when d is larger than 9, and that this bound holds with equality when d is large enough. They further conjectured that "large enough" means "greater than 9".

The recently announced proof of this conjecture will be sketched in this talk. As a preliminary, the associahedra will be described as well as a selection of the many problems these polytopes are related to.
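
For small d, the associahedron diameter can be checked by brute force via the triangulation correspondence above: two triangulations are adjacent exactly when they differ by one diagonal flip. The following sketch (our own illustration, feasible only for small n) enumerates all triangulations of a convex n-gon and runs BFS from each:

    from itertools import combinations
    from collections import deque

    def triangulations(poly):
        """All triangulations of a convex polygon (vertex labels in order),
        each returned as a frozenset of sorted vertex triples."""
        if len(poly) < 3:
            return [frozenset()]
        result = []
        for k in range(1, len(poly) - 1):  # apex over the edge (first, last)
            tri = frozenset([tuple(sorted((poly[0], poly[k], poly[-1])))])
            for left in triangulations(poly[:k + 1]):
                for right in triangulations(poly[k:]):
                    result.append(tri | left | right)
        return result

    def flip_graph(n):
        """Two triangulations are adjacent iff they are one diagonal flip
        apart, i.e. their triangle sets differ in exactly 2 + 2 triangles."""
        nodes = triangulations(tuple(range(n)))
        adj = {t: [] for t in nodes}
        for t1, t2 in combinations(nodes, 2):
            if len(t1 ^ t2) == 4:
                adj[t1].append(t2)
                adj[t2].append(t1)
        return adj

    def diameter(adj):
        best = 0
        for src in adj:  # BFS from every triangulation
            dist, queue = {src: 0}, deque([src])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            best = max(best, max(dist.values()))
        return best

    for n in range(4, 9):  # the (d = n-3)-dimensional associahedron
        print(f"n={n} (d={n - 3}): diameter {diameter(flip_graph(n))}")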

Friday, November 30, 2012: Nabil Mustafa (ESIEE)

Local-search techniques for geometric optimization

10:30am, at ENS, 45 rue d'Ulm, salle des Hauts du DI (Staircase A, 3rd floor, Dpt Informatique, at the very end of the corridor).

Abstract: Local-search is an intuitive approach towards solving combinatorial optimization problems: start with any feasible solution, and try to improve it by local improvement steps. Like other greedy approaches, it can fail to find the global optimum by getting stuck at a local optimum.

There have been a number of papers in the past few years which show the utility of local-search for geometric problems. I will survey the ideas and techniques behind them. In particular, I will describe a simple local-search (1+eps)-approximation algorithm for the geometric set cover problem (by extension, for computing minimum hitting sets) for a wide range of geometric objects. For example, this gives the first PTAS for the problem of covering a given set of points by a minimum-sized subset of disks from a given set of disks in the plane. The main technique is to construct an auxiliary planar graph on the geometric objects, and then follow the effect of local-search on this graph.
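
The framework itself is easy to state in code. Here is a minimal b-swap local-search sketch for covering points by disks (our own toy: the swap size, the input format, and the example instance are all illustrative; the PTAS guarantee needs b to grow with 1/eps plus the planar-graph analysis from the talk):

    from itertools import combinations

    def local_search_cover(points, disks, b=2):
        """b-swap local search for geometric set cover: start from all
        disks and repeatedly replace b disks with b-1 others while every
        point stays covered. disks is a list of ((cx, cy), radius)."""
        def covers(d, p):
            (cx, cy), r = d
            return (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= r * r

        def feasible(sol):
            return all(any(covers(d, p) for d in sol) for p in points)

        solution = list(disks)
        assert feasible(solution), "input disks must cover all points"
        improved = True
        while improved:
            improved = False
            for out in combinations(solution, b):
                rest = [d for d in solution if d not in out]
                for swap in combinations([d for d in disks if d not in rest], b - 1):
                    if feasible(rest + list(swap)):
                        solution = rest + list(swap)  # strictly smaller cover
                        improved = True
                        break
                if improved:
                    break
        return solution

    points = [(0, 0), (2, 0), (4, 0)]
    disks = [((0, 0), 1), ((2, 0), 1), ((4, 0), 1), ((2, 0), 3)]
    print(local_search_cover(points, disks))  # the single big disk suffices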

Joint work with Saurabh Ray.

Friday, November 16, 2012: Uli Wagner (EPFL, Lausanne)

Computing and extending continuous maps: polynomiality and undecidability

10:30am, at ENS, 45 rue d'Ulm, salle des Hauts du DI (Staircase A, 3rd floor, Dpt Informatique, at the very end of the corridor).

Abstract: We consider several basic problems in algebraic topology from a point of view of computational complexity.

A central theme in algebraic topology is to understand the structure of all (continuous) maps from X to Y, for given topological spaces X and Y. For topological purposes, maps are usually considered equivalent if they are homotopic, i.e., if they can be continuously deformed into one another. Thus, the object of interest is [X,Y], the set of homotopy classes of maps from X to Y. A special case of this homotopy classification problem are the higher homotopy groups \pi_k(Y) of a space Y which, roughly speaking, can be identified with [S^k,Y].

A closely related question is the extension problem, which asks whether a map f from a subspace A of X into Y can be extended (continuously) over all of X. For example, the famous Brouwer Fixed-Point Theorem states that the identity map from the k-sphere to itself cannot be extended to the (k+1)-dimensional disk.

In full generality, the above problems are undecidable, as follows from classical undecidability results of Adian--Rabin and Novikov concerning the fundamental group \pi_1(Y), i.e., concerning homotopy classification and extendability for maps from the circle S^1 into Y.

Thus, we study the problem under the assumption that the target Y is (k-1)-connected for some k\geq 2, i.e., that the first (k-1) homotopy groups of Y vanish; informally, this means that "Y has no holes up to dimension k-1" (a basic example of such a Y is the sphere S^k).

We show that, under this assumption, [X,Y] can be computed if the dimension of X is at most 2k-2, and the extension problem can be solved if the dimension of X is at most 2k-1. Moreover, if k is fixed, the algorithms run in polynomial time.

On the other hand, we show that, for \dim X = 2k, the extension problem becomes undecidable, even for fixed k.

We also discuss related results concerning homotopy groups.

This is joint work with M. Cadek, M. Krcal, J. Matousek, F. Sergeraert, and L. Vokrinek.

Friday, October 12, 2012: Alfredo Hubard (post-doc in the Talgo team, École normale supérieure and Fondation Sciences Mathématiques de Paris)

Topology and Ramsey theory of families of convex bodies

10:30am, at ENS, 45 rue d'Ulm, salle des Hauts du DI (Staircase A, 3rd floor, Dpt Informatique, at the very end of the corridor).

The classical Erdos-Szekeres theorem states that there exists a function M_points(n) such that among any M_points(n) points in the plane in general position, at least n of them are in convex position (i.e., conv(F) \neq conv(F - x) for all x \in F).
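
For concreteness, being in convex position means that every point is a vertex of the convex hull. A short self-contained check (a sketch using Andrew's monotone chain; assumes no duplicate points and general position):

    def in_convex_position(points):
        """True iff every point is a vertex of the convex hull
        (Andrew's monotone chain; assumes general position)."""
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        pts = sorted(set(points))
        if len(pts) < 3:
            return True
        hull = []
        for chain in (pts, pts[::-1]):  # lower hull, then upper hull
            start = len(hull)
            for p in chain:
                while len(hull) - start >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                    hull.pop()
                hull.append(p)
            hull.pop()  # the chain's endpoint reappears in the other chain
        return len(hull) == len(pts)

    print(in_convex_position([(0, 0), (4, 0), (4, 3), (0, 3)]))  # True
    print(in_convex_position([(0, 0), (4, 0), (4, 3), (2, 1)]))  # False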

In the 80's, Goodman and Pollack generalized the theorem to the realm of pseudoline arrangements and made a conjecture on the growth of the analogous Ramsey function M_{pseudolines}. Still in the 80's, Bisztriczky and Fejes Toth generalized this theorem to the realm of families of disjoint convex bodies and made a conjecture on the growth of the analogous Ramsey function M_{disjoint}. Then, around twelve years ago, Pach and Toth generalized this theorem to the realm of non-crossing convex bodies and conjectured that the theorem holds for families of k-crossing convex bodies. The upper bounds on M_{disjoint} and M_{noncrossing} were improved a couple of times in the last 20 years.

Using the concept of order type, we show that M_{disjoint} \leq M_{pseudolines} = M_{noncrossing}, and improve all the known upper bounds for these functions. We also prove the Pach-Toth conjecture.

The concept of order type generalizes the well-known concept for point sets in Euclidean space. I will briefly discuss connections to oriented matroid theory. In a celebrated result, Mnev constructed order types with realization spaces of arbitrary homotopy type. We begin the study of realization spaces of order types by endowing families of convex bodies with the Hausdorff distance. Our results in this direction generalize theorems of Levi and Mnev.

If time permits, I will discuss yet another generalization of the Erdos-Szekeres theorem and a connection to the big circle conjecture of Angel, Holroyd, Romik and Virag via the metric geometry of realization spaces.

These results are joint work with Andreas Holmsen and Michael Dobbins.

Wednesday, May 30, 2012: Stephen Kobourov (U. of Arizona and U. of Tübingen)

Proportional Contact Representation of Planar Graphs

2pm, at ENS, 45 rue d'Ulm, salle des Hauts du DI (Staircase A, 3rd floor, Dpt Informatique, at the very end of the corridor).

In a proportional contact representation of a planar graph, each vertex is represented by a simple polygon with area proportional to a given weight, and edges are represented by adjacencies between the corresponding pairs of polygons. We describe a new algorithm that guarantees 10-sided rectilinear polygons and runs in linear time.

We also describe a construction with 8-sided polygons, which is optimal in terms of polygonal complexity, as 8-sided polygons are sometimes necessary. Specifically, we show how to compute the combinatorial structure and how to refine it into an area-universal rectangular layout in linear time. However, computing the exact representation from the area-universal layout requires numerical iteration; alternatively, it can be approximated with a hill-climbing heuristic.

Thursday, March 15, 2012: Jeff Erickson (U. of Illinois at Urbana-Champaign, USA, and IST, Vienna, Austria) and Arnaud de Mesmay (ENS, Paris)

Room Pasteur. Enter 45 rue d'Ulm through the lodge, then go immediately to your right. Enter the door labeled "Pavillon Pasteur", in the Philosophy Department, walk along the corridor, find the stairs to your right, and go one level up.

At 2pm: Jeff Erickson, Topological hexahedral meshing of multiply-connected domains

In the mid-1990s, Thurston and Mitchell proved that any quadrilateral mesh of the sphere with an even number of faces can be extended to a topological hexahedral mesh of the ball. I will prove the following natural generalization of Thurston and Mitchell's result to higher-genus surfaces, which surprisingly seems to be new. A quadrilateral mesh of any closed surface embedded in R^3 can be extended to a compatible hexahedral mesh of the interior if and only if (1) the surface mesh has an even number of faces, and (2) the system of curves dual to the surface mesh has trivial Z_2-homology in the interior body.

Around 3:15pm: Arnaud de Mesmay, Testing graph isotopy on surfaces

In the study of graphs embedded on surfaces, the notion of graph isotopy is a natural variation of the well-known homotopy of paths or cycles: we say that two graphs are isotopic if there exists a continuous deformation between them. In this work, we investigate the following problem: given two embeddings of the same abstract graph on a surface, decide whether they are isotopic.

We provide efficient algorithms to solve this problem in two models. In the first one the embeddings are described by their arrangements with a cellularly embedded graph on the surface; in the second one they are piecewise-linear embeddings in the plane minus a finite set of points. The main tools we use are the existing algorithms to test homotopy, which we use as a subroutine, and a surgical approach inspired by the theory of mapping class groups. As a by-product, we reprove the following mathematical characterization, first observed by Ladegaillerie: Two graph embeddings are isotopic if and only if they are homotopic and congruent by an oriented homeomorphism.

This is joint work with Éric Colin de Verdière, to appear at SoCG 2012.

Thursday, January 5, 2012: Thomas Sauerwald (MPI Saarbrücken, Germany)

Ultra-Fast Rumor Spreading on Social Networks

11am, at ENS, 45 rue d'Ulm, room U/V (Aile Rataud, 2nd basement floor).

We analyze the popular push-pull protocol for spreading rumors in networks. Initially, a single node knows of a rumor. In each succeeding round, every node chooses a random neighbor, and the two nodes share the rumor if one of them is already aware of it. We present the first theoretical analysis of this protocol on random graphs that have a power law degree distribution with an arbitrary exponent beta > 2.
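
The protocol is simple enough to state in a few lines; here is a minimal round-by-round simulation (a sketch: for brevity we demo on the complete graph rather than on a power-law random graph, and the graph format is our assumption):

    import random

    def push_pull_rounds(adj, start, rng=random):
        """Rounds of push-pull until everyone is informed: in each round
        every node contacts one uniformly random neighbor, and the pair
        shares the rumor if either endpoint already knows it.
        Assumes the graph (dict: node -> neighbor list) is connected."""
        informed, rounds = {start}, 0
        while len(informed) < len(adj):
            new = set()
            for u in adj:
                v = rng.choice(adj[u])
                if u in informed or v in informed:
                    new.update((u, v))
            informed |= new
            rounds += 1
        return rounds

    n = 1000
    complete = {u: [v for v in range(n) if v != u] for u in range(n)}
    print(push_pull_rounds(complete, 0))  # Theta(log n) on the complete graph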

Our main findings reveal a dichotomy in the performance of the protocol that depends on the exponent of the power law. More specifically, we show that if 2 < beta < 3, then the rumor spreads to almost all nodes in Theta(log log n) rounds with high probability. This is exponentially faster than all previously known upper bounds for the push-pull protocol established for various classes of networks. On the other hand, if beta > 3, then Omega(log n) rounds are necessary.

This is joint work with Nikolaos Fountoulakis and Konstantinos Panagiotou.

Tuesday, December 13, 2011: Menelaos Karavelas (U. of Crete, Greece)

The maximum number of faces of the Minkowski sum of two convex polytopes

2pm, at ENS, 45 rue d'Ulm, salle des Hauts du DI (Staircase A, 3rd floor, Dpt Informatique, at the very end of the corridor).

Abstract: We derive tight expressions for the maximum values of the number of $k$-faces, $0\le k\le d-1$, of the Minkowski sum $P_1\oplus P_2$ of two $d$-dimensional convex polytopes $P_1$ and $P_2$, as a function of the number of vertices of the polytopes.

For even dimensions $d\ge 2$, the maximum values are attained if $P_1$ and $P_2$ are cyclic $d$-polytopes with disjoint vertex sets. For odd dimensions $d\ge 3$, the maximum values are attained if $P_1$ and $P_2$ are $\lfloor d/2\rfloor$-neighborly $d$-polytopes, whose vertex sets are chosen appropriately from two distinct $d$-dimensional moment-like curves.
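
In the plane the situation is classical and simple: the Minkowski sum of convex polygons with m and n vertices has at most m + n vertices and can be computed by merging the edge sequences by slope; the talk's contribution concerns the much subtler higher-dimensional face counts. A small sketch of the planar case, for intuition:

    import math

    def minkowski_sum(P, Q):
        """Minkowski sum of two convex polygons given in counterclockwise
        order, by merging the two edge sequences in slope order."""
        def reorder(poly):  # rotate so the lowest (then leftmost) vertex is first
            i = min(range(len(poly)), key=lambda k: (poly[k][1], poly[k][0]))
            return poly[i:] + poly[:i]

        def angle(poly, k):  # direction of edge k, normalized to [0, 2*pi)
            (ax, ay), (bx, by) = poly[k % len(poly)], poly[(k + 1) % len(poly)]
            return math.atan2(by - ay, bx - ax) % (2 * math.pi)

        P, Q = reorder(P), reorder(Q)
        out, i, j = [], 0, 0
        while i < len(P) or j < len(Q):
            out.append((P[i % len(P)][0] + Q[j % len(Q)][0],
                        P[i % len(P)][1] + Q[j % len(Q)][1]))
            if j == len(Q) or (i < len(P) and angle(P, i) <= angle(Q, j)):
                i += 1
            else:
                j += 1
        return out

    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    triangle = [(0, 0), (2, 0), (1, 1)]
    print(minkowski_sum(square, triangle))  # a heptagon: 4 + 3 = 7 vertices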

This is joint work with Eleni Tzanaki.

Thursday, October 20, 2011: Jeff Erickson (U. of Illinois at Urbana-Champaign, USA, and IST, Vienna, Austria)

Tracing Curves on Triangulated Surfaces

at 2pm, at ENS, 45 rue d'Ulm, salle des Hauts du DI. Access modified: Enter the main building through the main entrance; in the hall ("Aquarium"), take the staircase on the left ("Headmistress' staircase"), 3rd floor, door to the right.

Abstract: Let S be a triangulated surface, possibly with boundary, composed of n triangles. A simple path or cycle in S is normal if it avoids every vertex of S and it does not cross any edge of S twice in a row. We describe a simple algorithm to trace any normal curve in S in O(n^2\log X) time, where X is the total number of times the curve crosses edges of S. In particular, our algorithm runs in polynomial time even when the number of crossings is exponential in n. Our tracing algorithm computes a new cellular decomposition of S with complexity O(n); the traced curve appears as a simple path or cycle in the 1-skeleton of the new decomposition.

Our tracing strategy implies several fast algorithms for (multi)curves represented by normal coordinates. For example, we can count the components of a normal curve, determine whether two disjoint normal curves are isotopic, or determine whether a normal curve disconnects two given points on the surface, all in O(n^2\log X) time. Our algorithms are competitive with and conceptually simpler than earlier algorithms by Schaefer, Sedgwick, and Stefankovic [COCOON 2002] based on word equations and text compression.

Our algorithm can also be used to trace simple geodesic paths in piecewise-linear surfaces, represented by a set of Euclidean triangles with pairs of equal-length edges identified. For example, given a starting point and a direction, in the local coordinate system of one of the triangles, we can locate the first point of self-intersection of the resulting geodesic path in O(n^2\log X) time.

This is unpublished joint work with my PhD student Amir Nayyeri. The talk will be self-contained; no prior exposure to computational topology is necessary.

Thursday, June 16, 2011: Benjamin A. Burton (U. Queensland, Brisbane, Australia)

Unknot recognition, linear programming and the elusive polynomial time algorithm

at 11am, at ENS, 45 rue d'Ulm, salle des Hauts du DI (Staircase A, 3rd floor, Dpt Informatique, at the very end of the corridor)

Abstract: What do practical tools such as integer and linear programming have to do with theoretical problems such as unknot recognition and the Poincaré conjecture? In this talk we explore new approaches to old and difficult computational problems from geometry and topology: how to tell whether a loop of string is knotted, or whether a 3-dimensional space has no interesting topological features. Although the best known algorithms for these problems run in exponential time, there is increasing evidence that a polynomial time solution might be possible. We outline several promising approaches in which linear programming and integer programming play a starring role.
