TCS+

A carbon-free dissemination of ideas across the globe.

TCS+ talk: Wednesday, March 1st — Josh Alman, MIT

Our next talk will take place on Wednesday, March 1st at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Josh Alman (MIT) will present joint work with Ryan Williams on “Probabilistic Rank and Matrix Rigidity”, to appear in STOC’17 (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

A matrix M is called rigid if M has ‘high’ rank, and one needs to change ‘many’ entries of M before it has ‘low’ rank. Matrix rigidity was introduced by Leslie Valiant in 1977, as a path towards proving arithmetic circuit lower bounds. In order to prove such a bound, one needs to give an explicit rigid matrix (for certain appropriate rigidity parameters). The problem of finding such an explicit rigid matrix is still open.
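For readers who want the parameters made concrete, the standard rigidity function of a matrix $M$ (background notation, not specific to this talk) is

$R_M(r) \;=\; \min\{\,\#\{(i,j) : M'_{ij} \neq M_{ij}\} \;:\; \mathrm{rank}(M') \le r\,\},$

i.e. the minimum number of entries one must change to bring the rank down to $r$. Valiant showed, roughly, that if an explicit $n \times n$ matrix $M$ satisfies $R_M(\epsilon n) \ge n^{1+\delta}$ for constants $\epsilon, \delta > 0$, then the linear map $x \mapsto Mx$ cannot be computed by arithmetic circuits of size $O(n)$ and depth $O(\log n)$; this is the kind of lower bound the program aims at.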

One family of matrices which was conjectured to be rigid is the Walsh-Hadamard transform (a.k.a. Sylvester matrices, a.k.a. the communication matrix of Inner Product modulo 2). I will present a proof that the Walsh-Hadamard transform is, in fact, not sufficiently rigid for Valiant’s program. Our proof uses a connection between low-rank matrices and low-degree polynomials that underlies recent “polynomial method” algorithms. I will also discuss a natural notion we call the probabilistic rank of a matrix, which measures the extent to which a matrix can be probabilistically approximated by low-rank matrices.
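For concreteness, the matrix family in question is easy to generate; the short sketch below (plain NumPy, not code from the paper) builds it via the standard Sylvester recursion, so that entry $(x, y)$ equals $(-1)^{\langle x, y\rangle}$ with the inner product of the bit-indices taken mod 2.

import numpy as np

def walsh_hadamard(k):
    # 2^k x 2^k Walsh-Hadamard (Sylvester) matrix: the +/-1 version of
    # the communication matrix of Inner Product modulo 2.
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])  # Sylvester recursion
    return H

print(walsh_hadamard(2))
# [[ 1  1  1  1]
#  [ 1 -1  1 -1]
#  [ 1  1 -1 -1]
#  [ 1 -1 -1  1]]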

Based on joint work with Ryan Williams to appear in STOC’17.

TCS+ talk: Wednesday, February 15th — Jelani Nelson, Harvard

Our next talk will take place on Wednesday, February 15th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Jelani Nelson (Harvard) will give us the final word on the J-L lemma: “Optimality of the Johnson-Lindenstrauss lemma” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

We consider the question: given integers $n, d > 1$ and some $0 < \varepsilon < 1$, what is the minimum value of $m$ such that for all $n$-point subsets $X$ of $d$-dimensional Euclidean space, there exists an embedding $f$ from $X$ to $m$-dimensional Euclidean space with distortion at most $1+\varepsilon$? We show that for nearly the full range of interest for the parameters $n$, $d$, and $\varepsilon$, the Johnson-Lindenstrauss lemma is tight: there exists an $n$-point subset of $d$-dimensional Euclidean space such that any such $f$ must have $m = \Omega(\varepsilon^{-2} \log n)$.
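For contrast with the lower bound, here is a minimal numerical sketch of the upper-bound side of the Johnson-Lindenstrauss lemma, using a scaled Gaussian projection; the constant 8 and the parameter choices are purely illustrative, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 300, 5000, 0.5
X = rng.normal(size=(n, d))               # n points in R^d

m = int(np.ceil(8 * np.log(n) / eps**2))  # target dimension ~ eps^{-2} log n
A = rng.normal(size=(m, d)) / np.sqrt(m)  # scaled Gaussian projection
Y = X @ A.T                               # embedded points in R^m

def pairwise_dists(P):
    # Euclidean distances via the Gram trick
    sq = np.sum(P ** 2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2 * P @ P.T
    return np.sqrt(np.maximum(D2, 0.0))

iu = np.triu_indices(n, k=1)
ratio = pairwise_dists(Y)[iu] / pairwise_dists(X)[iu]
print(m, ratio.min(), ratio.max())        # ratios should lie roughly in [1 - eps, 1 + eps]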

Joint work with Kasper Green Larsen (Aarhus University).

TCS+ talk: Wednesday, February 1st — Nikhil Bansal, Eindhoven University of Technology

Our first talk of 2017 will take place next Wednesday, February 1st at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Nikhil Bansal from Eindhoven University of Technology will speak about his beautiful, very recent work (with Dadush and Garg), “Algorithmic discrepancy beyond partial coloring” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Last but not least: we welcome a new member to the TCS+ organizing team, Ilya Razenshteyn from MIT. It’s always great to see TCS+ growing and receiving your ongoing support. We hope to serve the TCS community as best we can!

Partial coloring is one of the most widely used methods in combinatorial discrepancy. However, in many cases it leads to sub-optimal bounds, as the partial coloring step must be iterated a logarithmic number of times and the errors can add up arbitrarily. I will describe a recent algorithmic approach that overcomes these limitations and can be used to efficiently find colorings matching the so-called Banaszczyk bound for many problems.
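As background (standard definitions, not taken from the talk): the combinatorial discrepancy of a set system $S_1, \dots, S_m \subseteq [n]$ is

$\mathrm{disc}(S_1,\dots,S_m) \;=\; \min_{x \in \{-1,+1\}^n} \; \max_{j} \Big|\sum_{i \in S_j} x_i\Big|,$

and Banaszczyk’s theorem, in its vector form, states that for any vectors $v_1, \dots, v_n$ of Euclidean norm at most 1 and any convex body $K \subseteq \mathbb{R}^m$ of Gaussian measure at least $1/2$, there exist signs $x_i \in \{-1,+1\}$ with $\sum_i x_i v_i \in 5K$. The original proof is non-constructive; the talk concerns finding such colorings efficiently.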

Based on joint works with Daniel Dadush and Shashwat Garg.

TCS+ talk: Wednesday, December 7th — Jerry Li, MIT

The last TCS+ talk for 2016 will take place on Wednesday, December 7th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Jerry Li (MIT) will tell us about his work from FOCS’16 on “Robust Estimators in High Dimensions without the Computational Intractability” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

We study high-dimensional distribution learning in an agnostic setting where an adversary is allowed to arbitrarily corrupt an $\varepsilon$-fraction of the samples. Such questions have a rich history spanning statistics, machine learning and theoretical computer science. Even in the most basic settings, the only known approaches are either computationally inefficient or lose dimension-dependent factors in their error guarantees. This raises the following question: is high-dimensional agnostic distribution learning even possible, algorithmically?

In this talk, we present the first computationally efficient algorithms with dimension-independent error guarantees for agnostically learning several fundamental classes of high-dimensional distributions: (1) a single Gaussian, (2) a product distribution on the hypercube, (3) mixtures of two product distributions (under a natural balancedness condition), and (4) mixtures of spherical Gaussians. Our algorithms achieve error that is independent of the dimension, and in many cases scales nearly optimally with the fraction of adversarially corrupted samples. Moreover, we develop a general recipe for detecting and correcting corruptions in high dimensions that may be applicable to many other problems.
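To illustrate the corruption model, here is a small sketch that only shows why the naive estimators lose dimension-dependent factors; it is not the algorithm from the paper.

import numpy as np

rng = np.random.default_rng(1)
d, N, eps = 200, 20000, 0.05
samples = rng.normal(size=(N, d))          # clean samples from N(0, I_d), true mean 0

k = int(eps * N)                           # adversary corrupts an eps-fraction ...
samples[:k] = 10.0                         # ... by planting a far-away point

naive_mean = samples.mean(axis=0)
coord_median = np.median(samples, axis=0)
print(np.linalg.norm(naive_mean))          # ~ eps * 10 * sqrt(d): error grows with d
print(np.linalg.norm(coord_median))        # smaller, but still grows like sqrt(d)

The estimators in the talk instead achieve error depending only on $\varepsilon$ (in many cases nearly optimally), independent of the dimension.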

Based on joint work with Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Ankur Moitra, and Alistair Stewart.

TCS+ talk: Wednesday, November 23rd — Mark Zhandry, Princeton

Our next talk will take place on Wednesday, November 23rd at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Mark Zhandry (Princeton) will tell us about his work from Crypto’16 on “The Magic of ELFs” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

We introduce the notion of an Extremely Lossy Function (ELF). An ELF is a family of functions with an image size that is tunable anywhere from injective to having a polynomial-sized image. Moreover, for any efficient adversary, for a sufficiently large polynomial r (necessarily chosen to be larger than the running time of the adversary), the adversary cannot distinguish the injective case from the case of image size r.
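Roughly, and in generic notation rather than the paper’s exact formulation: an ELF generator $\mathrm{Gen}(M, r)$ outputs a function on a domain of size $M$ whose image has size about $r$ (injective when $r = M$), and for every polynomial-time adversary $\mathcal{A}$ and inverse-polynomial $\delta$ there is a polynomial bound $r$ (larger than $\mathcal{A}$’s running time) such that

$\Big|\Pr\big[\mathcal{A}(f) = 1 : f \leftarrow \mathrm{Gen}(M, M)\big] - \Pr\big[\mathcal{A}(f) = 1 : f \leftarrow \mathrm{Gen}(M, r)\big]\Big| \;<\; \delta.$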

We develop a handful of techniques for using ELFs, and show that such extreme lossiness is useful for instantiating random oracles in several settings. In particular, we show how to use ELFs to build secure point function obfuscation with auxiliary input, as well as polynomially-many hardcore bits for any one-way function. Such applications were previously known only from strong knowledge assumptions; for example, polynomially-many hardcore bits were only known from differing-inputs obfuscation, a notion whose plausibility has been seriously challenged. We also use ELFs to build a simple hash function with output intractability, a new notion we define that may be useful for generating common reference strings.

Next, we give a construction of ELFs relying on the exponential hardness of the decisional Diffie-Hellman problem, which is plausible in pairing-based groups. Combining with the applications above, our work gives several practical constructions relying on qualitatively different — and arguably better — assumptions than prior works.

TCS+ talk: Wednesday, November 9th — Bo Waggoner, U. Penn

Our next talk will take place on Wednesday, November 9th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 18:00 UTC). Bo Waggoner (U. Penn) will tell us about joint work with Yiling Chen from FOCS’16 on “Informational Substitutes” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

When can pieces of information be termed substitutes or complements? I’ll propose an answer based on “value of information” theory and show that these definitions are fundamental to problems in both game theory and algorithms. The “main result” shows that substitutes characterize good “efficient market hypothesis” equilibria in prediction markets; another shows that substitutes characterize efficient approximation algorithms for information acquisition.

The goal is to have a tutorial feel, introducing some fun concepts and tools that are less familiar to theory audiences. It will be heavy on concepts and definitions, light on technicalities.

TCS+ talk: Wednesday, October 26th — Claire Mathieu, ENS Paris

Our next talk will take place on Wednesday, October 26th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Claire Mathieu (ENS Paris) will tell us about her work “Local search yields approximation schemes for k-means and k-median in Euclidean and minor-free metrics” from FOCS’16 (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

We give the first polynomial-time approximation schemes (PTASs) for the following problems: (1) uniform facility location in edge-weighted planar graphs; (2) k-median and k-means in edge-weighted planar graphs; (3) k-means in Euclidean spaces of bounded dimension. Our first and second results extend to minor-closed families of graphs. All our results extend to cost functions that are the p-th power of the shortest-path distance. The algorithm is local search, where the local neighborhood of a solution S consists of all solutions obtained from S by removing and adding $1/\epsilon^{O(1)}$ centers.
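To convey the flavor of the algorithm, here is a minimal single-swap local search for discrete k-median (a simplified sketch, not the paper’s algorithm: the PTAS allows removing and adding up to $1/\epsilon^{O(1)}$ centers per move, and its analysis relies on planarity or bounded dimension).

import numpy as np

def kmedian_cost(points, centers):
    # sum over points of the distance to the nearest chosen center
    D = np.linalg.norm(points[:, None, :] - points[centers][None, :, :], axis=2)
    return D.min(axis=1).sum()

def local_search_kmedian(points, k, rng):
    n = len(points)
    centers = list(rng.choice(n, size=k, replace=False))  # arbitrary initial solution
    cost = kmedian_cost(points, centers)
    improved = True
    while improved:
        improved = False
        for i in range(k):                       # try every swap: one center out ...
            for j in range(n):                   # ... one candidate point in
                if j in centers:
                    continue
                cand = centers[:i] + [j] + centers[i + 1:]
                c = kmedian_cost(points, cand)
                if c < cost - 1e-12:             # accept only strict improvements
                    centers, cost, improved = cand, c, True
    return centers, cost

rng = np.random.default_rng(0)
pts = rng.normal(size=(60, 2))
print(local_search_kmedian(pts, k=4, rng=rng))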

Joint work with Vincent Cohen-Addad and Philip N. Klein.