TCS+

A carbon-free dissemination of ideas across the globe.

TCS+ talk: Wednesday, May 27 — Rahul Ilango, MIT

The next TCS+ talk will take place this coming Wednesday, May 27th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Rahul Ilango from MIT will speak about “Is it (NP) hard to distinguish order from chaos?” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Due to security concerns, registration is required to attend the interactive talk. (The link to the YouTube livestream will also be posted on our website on the day of the talk, so people who did not sign up will still be able to watch the talk live.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: The Minimum Circuit Size Problem (MCSP) roughly asks what the “complexity” of a given string is: viewing the string as the truth table of a Boolean function, what is the size of the smallest circuit computing that function? Informally, one can think of this as determining the degree of “computational order” a string has.
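For concreteness, here is a minimal (and hopelessly slow) Python sketch of this question for tiny truth tables, under illustrative choices not taken from the talk: circuits are encoded as straight-line programs over an AND/OR/NOT basis, and the hypothetical helper mcsp_bruteforce simply enumerates circuits of increasing size until one matches the target truth table.

```python
from itertools import product

def eval_circuit(n, gates, assignment):
    """Evaluate a straight-line circuit on one 0/1 input assignment.

    Wires 0..n-1 hold the inputs; each gate ("AND"|"OR"|"NOT", i, j) reads
    earlier wires i (and j; ignored for NOT) and appends a new wire.
    The circuit's output is the last wire."""
    wires = list(assignment)
    for op, i, j in gates:
        if op == "AND":
            wires.append(wires[i] & wires[j])
        elif op == "OR":
            wires.append(wires[i] | wires[j])
        else:  # NOT
            wires.append(1 - wires[i])
    return wires[-1]

def truth_table(n, gates):
    return tuple(eval_circuit(n, gates, a) for a in product((0, 1), repeat=n))

def mcsp_bruteforce(target, n):
    """Number of gates in a smallest AND/OR/NOT circuit whose truth table
    equals `target`, a tuple of 2^n bits.  (With zero gates the circuit just
    outputs its last input wire.)  Doubly exponential time: toy sizes only."""
    size = 0
    while True:
        def extend(gates):
            if len(gates) == size:
                return truth_table(n, gates) == target
            k = n + len(gates)  # wires available to the next gate
            for op in ("AND", "OR", "NOT"):
                for i in range(k):
                    for j in (range(k) if op != "NOT" else (i,)):
                        if extend(gates + [(op, i, j)]):
                            return True
            return False
        if extend([]):
            return size
        size += 1

# XOR on two inputs (truth table in input order 00, 01, 10, 11):
print(mcsp_bruteforce((0, 1, 1, 0), 2))  # -> 4, e.g. (x OR y) AND NOT (x AND y)
```

Even this toy example makes plain that brute force scales doubly exponentially in the number of input variables.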

In the past several years, there has been a resurgence of interest in MCSP. A series of exciting results has begun to unravel what looks to be a fascinating story. This story already reveals deep connections between MCSP and a growing list of fields, including cryptography, learning theory, structural complexity theory, average-case complexity, and circuit complexity. As an example, Santhanam recently proved a conditional equivalence between the complexity of MCSP and the existence of one-way functions.

This talk is split into two parts. The first part is a broad introduction to MCSP, answering the following questions: What is this problem? Why is it interesting? What do we know so far, and where might the story go next? The second part discusses recent joint work with Bruno Loff and Igor Oliveira showing that the “multi-output version” of MCSP is NP-hard.


TCS+ talk: Wednesday, May 20 — Mark Bun, Boston University

The next TCS+ talk will take place this coming Wednesday, May 20th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Mark Bun from Boston University will speak about “An Equivalence between Private Classification and Online Predictability” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Due to security concerns, registration is required to attend the interactive talk. (The link to the YouTube livestream will also be posted on our website on the day of the talk, so people who did not sign up will still be able to watch the talk live.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: We prove that every concept class with finite Littlestone dimension can be learned by an (approximate) differentially-private algorithm. The converse direction was shown in recent work of Alon, Livni, Malliaris, and Moran, STOC ’19. Together these two results show that a class of functions is privately learnable if and only if it is learnable in the mistake-bound model of online learning. To establish our result, we introduce “global stability,” a new notion of algorithmic stability for learning algorithms that we show can always be satisfied when learning classes of finite Littlestone dimension.
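As a concrete illustration of the mistake-bound model referenced above (not of the private learner from the paper), here is a short Python sketch of the classic halving algorithm for a finite concept class; its mistake bound \log_2 |H| upper-bounds, but in general does not equal, the Littlestone dimension. The threshold class in the demo is a hypothetical example chosen purely for illustration.

```python
def halving_learner(hypotheses, stream):
    """Run the halving algorithm on a stream of (x, true_label) pairs.

    Predict the majority vote of the hypotheses still consistent with the
    history, then discard those that disagree with the revealed label.  If
    some hypothesis in the class labels the whole stream correctly, the
    number of mistakes is at most log2(len(hypotheses))."""
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        votes = sum(h(x) for h in version_space)
        prediction = 1 if 2 * votes >= len(version_space) else 0
        if prediction != y:
            mistakes += 1
        version_space = [h for h in version_space if h(x) == y]
    return mistakes

# Demo: threshold functions on {0, ..., 15}; the target is the threshold at 11.
thresholds = [(lambda x, t=t: 1 if x >= t else 0) for t in range(17)]
target = lambda x: 1 if x >= 11 else 0
stream = [(x, target(x)) for x in [8, 13, 2, 11, 10, 15, 0, 12]]
print(halving_learner(thresholds, stream))  # at most log2(17) < 5 mistakes
```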


TCS+ talk: Wednesday, May 13 — Sahil Singla, Princeton University and IAS

The next TCS+ talk will take place this coming Wednesday, May 13th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Sahil Singla from Princeton University and IAS will speak about “Online Vector Balancing and Geometric Discrepancy” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Due to security concerns, registration is required to attend the interactive talk. (The link to the YouTube livestream will also be posted on our website on the day of the talk, so people who did not sign up will still be able to watch the talk live.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: We consider an online vector balancing question in which T vectors, drawn from an arbitrary distribution over [-1,1]^n, arrive one-by-one and must immediately be assigned a \{+1,-1\} sign. The goal is to keep the discrepancy, i.e., the maximum \ell_{\infty}-norm of the signed prefix-sums, as small as possible. A concrete instance is the online interval discrepancy problem, where T points are sampled one-by-one uniformly from the unit interval [0,1] and must immediately be colored \{+1,-1\} so that every sub-interval always remains nearly balanced. Since random coloring incurs \Omega(T^{1/2}) discrepancy, while the worst-case offline bounds are \Theta(\sqrt{n \log (T/n)}) for vector balancing and 1 for interval balancing, a natural question is whether one can (nearly) match the offline bounds in the online setting. One must exploit the stochasticity, since in the worst case any online algorithm is known to incur \Omega(T^{1/2}) discrepancy.
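As a quick numerical illustration of the setup (and only of the setup; the algorithm in the paper is different), the following NumPy sketch compares uniformly random signs against a naive greedy rule that picks whichever sign keeps the current prefix sum smallest in \ell_{\infty}. For simplicity it uses i.i.d. uniform vectors in [-1,1]^n, a special case of the arbitrary distributions considered in the talk; the function names are illustrative.

```python
import numpy as np

def online_signing_discrepancy(vectors, choose_sign):
    """Feed vectors one by one to an online signing rule and return the
    maximum l_infinity norm over all signed prefix sums."""
    prefix = np.zeros(vectors.shape[1])
    worst = 0.0
    for v in vectors:
        prefix = prefix + choose_sign(prefix, v) * v
        worst = max(worst, np.abs(prefix).max())
    return worst

def random_sign(prefix, v):
    return np.random.choice([-1, 1])

def greedy_sign(prefix, v):
    # Whichever sign keeps the current prefix sum smaller in l_infinity.
    return -1 if np.abs(prefix - v).max() < np.abs(prefix + v).max() else 1

T, n = 10000, 5
vectors = np.random.default_rng(0).uniform(-1, 1, size=(T, n))
print("random:", online_signing_discrepancy(vectors, random_sign))
print("greedy:", online_signing_discrepancy(vectors, greedy_sign))
```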

In this work, we introduce a new framework that allows us to handle online vector balancing even when the input distribution has dependencies across coordinates. In particular, this lets us obtain a \textrm{poly}(n, \log T) bound for online vector balancing under arbitrary input distributions, and a \textrm{poly}\log (T) bound for online interval discrepancy. Our framework is powerful enough to capture other well-studied geometric discrepancy problems; e.g., we obtain a \textrm{poly}(\log^d (T)) bound for the online d-dimensional Tusnády’s problem. All our bounds are tight up to polynomial factors.

A key new technical ingredient in our work is an anti-concentration inequality for sums of pairwise uncorrelated random variables, which might also be of independent interest.

Based on joint works with Nikhil Bansal, Haotian Jiang, Janardhan Kulkarni, and Makrand Sinha. Part of this work appears in STOC 2020 and is available at https://arxiv.org/abs/1912.03350


TCS+ talk: Wednesday, May 6 — Nathan Klein, University of Washington

The next TCS+ talk will take place this coming Wednesday, May 6th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Nathan Klein from the University of Washington will speak about “An improved approximation algorithm for TSP in the half integral case” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Due to security concerns, registration is required to attend the interactive talk. (The link to the YouTube livestream will also be posted on our website on the day of the talk, so people who did not sign up will still be able to watch the talk live.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: A classic result of Christofides from the 1970s gives a fast algorithm for the metric traveling salesperson problem (TSP) that returns a solution of cost at most 3/2 times the optimum. Since then, however, no better approximation algorithm has been found. In this talk, I will give an overview of research towards the goal of beating 3/2 and will present the first sub-3/2 approximation algorithm for the special case of “half integral” TSP instances. These instances have received significant attention in part due to a conjecture of Schalekamp, Williamson, and van Zuylen that they attain the integrality gap of the subtour polytope. If this conjecture is true, our work shows that the integrality gap of the polytope is bounded away from 3/2, giving hope for an improved approximation for the general case. This talk is based on joint work with Anna Karlin and Shayan Oveis Gharan.
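For reference, here is a compact sketch of the classical Christofides algorithm mentioned above (the 3/2 baseline, not the new sub-3/2 algorithm from the talk), written in Python and assuming a recent version of networkx for the spanning tree, matching, and Eulerian circuit subroutines; the example graph is illustrative.

```python
import random
import networkx as nx

def christofides_tour(G):
    """Christofides' 3/2-approximation for metric TSP.  G is a complete
    networkx Graph whose 'weight' attributes satisfy the triangle inequality;
    returns a Hamiltonian cycle as a list of nodes."""
    # 1. Minimum spanning tree.
    mst = nx.minimum_spanning_tree(G, weight="weight")
    # 2. Minimum-weight perfect matching on the odd-degree vertices of the MST
    #    (there is always an even number of them).
    odd = [v for v, deg in mst.degree() if deg % 2 == 1]
    matching = nx.min_weight_matching(G.subgraph(odd), weight="weight")
    # 3. MST + matching has all even degrees, so it admits an Eulerian circuit.
    multi = nx.MultiGraph(mst)
    multi.add_edges_from(matching)
    circuit = nx.eulerian_circuit(multi)
    # 4. Shortcut repeated vertices; the triangle inequality ensures this does
    #    not increase the cost.
    tour, seen = [], set()
    for u, _ in circuit:
        if u not in seen:
            seen.add(u)
            tour.append(u)
    tour.append(tour[0])
    return tour

# Example: complete graph on 6 random points in the plane, Euclidean weights.
pts = [(random.random(), random.random()) for _ in range(6)]
G = nx.Graph()
for a in range(6):
    for b in range(a + 1, 6):
        d = ((pts[a][0] - pts[b][0]) ** 2 + (pts[a][1] - pts[b][1]) ** 2) ** 0.5
        G.add_edge(a, b, weight=d)
print(christofides_tour(G))
```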

TCS+ talk: Wednesday, April 29 — Sepideh Mahabadi, TTIC

The next TCS+ talk will take place this coming Wednesday, April 29th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Sepideh Mahabadi from TTIC will speak about “Non-Adaptive Adaptive Sampling in Turnstile Streams” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Due to security concerns, registration is required to attend the interactive talk. (The link to the YouTube livestream will also be posted on our website on the day of the talk, so people who did not sign up will still be able to watch the talk live.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: Adaptive sampling is a useful algorithmic tool for data summarization problems in the classical centralized setting, where the entire dataset is available to the single processor performing the computation. Adaptive sampling repeatedly selects rows of an underlying n by d matrix A, where n \gg d, with probabilities proportional to their distances to the subspace spanned by the previously selected rows. Intuitively, adaptive sampling seems limited to trivial multi-pass algorithms in the streaming model of computation, due to its inherently sequential nature: the sampling probabilities for each round can be assigned only after the previous round is completed. Surprisingly, we show this is not the case by giving the first one-pass algorithms for adaptive sampling on turnstile streams, using space \text{poly}(d,k,\log n), where k is the number of adaptive sampling rounds to be performed.
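To make the sampling rule concrete, here is a small NumPy sketch of the classical centralized (offline) procedure, not of the new one-pass streaming algorithm: in each of k rounds it computes every row's distance to the span of the rows chosen so far and samples the next row proportionally to those distances. The function name and the squared-versus-unsquared convention noted in the comments are illustrative choices.

```python
import numpy as np

def adaptive_sampling(A, k, rng=None):
    """Offline adaptive sampling of k row indices of an n x d matrix A.

    In each round, every row's distance to the span of the already-selected
    rows is computed, and the next row is drawn with probability proportional
    to that distance.  (Many papers sample proportionally to the *squared*
    distance; replace `dist` by `dist**2` for that variant.)"""
    rng = np.random.default_rng() if rng is None else rng
    n, d = A.shape
    selected = []
    for _ in range(k):
        if selected:
            # Orthonormal basis for the span of the selected rows.
            Q, _ = np.linalg.qr(A[selected].T)        # shape (d, r)
            residual = A - (A @ Q) @ Q.T              # part orthogonal to that span
        else:
            residual = A
        dist = np.linalg.norm(residual, axis=1)
        selected.append(int(rng.choice(n, p=dist / dist.sum())))
    return selected

# Example on a random tall matrix (n >> d).
A = np.random.default_rng(1).normal(size=(1000, 10))
print(adaptive_sampling(A, k=3))
```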

Our adaptive sampling procedure has a number of applications to various data summarization problems on turnstile streams that either improve the state of the art or have previously been studied only in the more relaxed row-arrival model. These include column subset selection, subspace approximation, projective clustering, and volume maximization. We complement our algorithmic results for volume maximization with lower bounds that are tight up to lower-order terms, even for multi-pass algorithms. By a similar construction, we also obtain lower bounds for volume maximization in the row-arrival model, which we match with competitive upper bounds.

This is joint work with Ilya Razenshteyn, David Woodruff, and Samson Zhou.

TCS+ talk: Wednesday, April 22 — Huacheng Yu, Princeton

The next TCS+ talk will take place this coming Wednesday, April 22nd at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Huacheng Yu from Princeton will speak about a “Nearly Optimal Static Las Vegas Succinct Dictionary” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. Due to security concerns, registration is required to attend the interactive talk. (The link to the YouTube livestream will also be posted on our website on the day of the talk, so people who did not sign up will still be able to watch the talk live.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: Given a set S of n (distinct) keys from key space [U], each associated with a value from \Sigma, the static dictionary problem asks to preprocess these (key, value) pairs into a data structure, supporting value-retrieval queries: for any given x \in [U], valRet(x) must return the value associated with x if x is in S, or return “N/A” if x is not in S. The special case where |\Sigma|=1 is called the membership problem. The “textbook” solution is to use a hash table, which occupies linear space and answers each query in constant time. On the other hand, the minimum possible space to encode all (key, value) pairs is only \textrm{OPT} := \lg \binom{U}{n} + n \lg |\Sigma| bits, which could be much less.
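For orientation, here is a minimal Python sketch of the “textbook” baseline just described, a chained hash table supporting valRet in expected constant time; it uses \Theta(n(\lg U + \lg|\Sigma|)) bits, far above the \textrm{OPT} bound, which is the gap the data structure in the talk nearly closes. Class and method names are illustrative.

```python
class StaticDictionary:
    """The textbook baseline: a chained hash table over the (key, value)
    pairs.  Queries take expected O(1) time, but the space is
    Theta(n (lg U + lg |Sigma|)) bits, well above OPT."""

    def __init__(self, pairs):
        self.m = max(1, 2 * len(pairs))            # number of buckets
        self.buckets = [[] for _ in range(self.m)]
        for key, value in pairs:
            self.buckets[hash(key) % self.m].append((key, value))

    def val_ret(self, x):
        """valRet(x): the value associated with x, or "N/A" if x is not stored."""
        for key, value in self.buckets[hash(x) % self.m]:
            if key == x:
                return value
        return "N/A"

# Membership is the special case where every stored value is identical.
D = StaticDictionary([(3, "a"), (14, "b"), (159, "c")])
print(D.val_ret(14), D.val_ret(7))  # -> b N/A
```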

In this talk, we will present a randomized dictionary data structure using \textrm{OPT}+\textrm{poly}\lg n+O(\lg\lg\cdots\lg U) bits of space and expected constant query time, assuming the query algorithm has access to an external data-independent lookup table of size n^{0.001}. Previously, even for membership queries and when U\leq n^{O(1)}, the best known data structure with constant query time requires \textrm{OPT}+n/\textrm{poly} \lg n bits of space (due to Pagh [Pagh’01] and Pătraşcu [Pat’08]), and the best known structure using at most \textrm{OPT}+n^{0.999} bits of space has O(\lg n) query time.

TCS+ talk: Wednesday, April 8 — Ramon van Handel, Princeton

The next TCS+ talk will take place this coming Wednesday, April 8th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Ramon van Handel from Princeton will speak about how “Rademacher type and Enflo type coincide” (abstract below).

You can reserve a spot as an individual or a group to join us live by signing up on the online form. (The link will also be posted on our website on the day of the talk, so people who did not sign up will still be able to join, until the maximum capacity of 300 seats is reached.) As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: In Banach space theory, Rademacher type is an important invariant that controls many geometric and probabilistic properties of normed spaces. It is of considerable interest in various settings to understand to what extent such powerful tools extend to general metric spaces. A natural metric analogue of Rademacher type was proposed by Enflo in the 1960s-70s, and has found a number of interesting applications. Despite much work in the intervening years, however, the relationship between Rademacher type and Enflo type has remained unclear. This basic question is settled in joint work with Paata Ivanisvili and Sasha Volberg: in the setting of Banach spaces, Rademacher type and Enflo type coincide. The proof is based on a very simple but apparently novel insight on how to prove dimension-free inequalities on the Boolean cube. I will not assume any prior background in Banach space theory in the talk.
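For readers who want the precise notions being compared, one standard formulation (constants aside) is as follows. A Banach space X has Rademacher type p if there is a constant T such that, for independent uniform random signs \varepsilon_1,\dots,\varepsilon_n and all x_1,\dots,x_n \in X,

\[ \mathbb{E}\Big\|\sum_{i=1}^n \varepsilon_i x_i\Big\|^p \;\le\; T^p \sum_{i=1}^n \|x_i\|^p, \]

and X has Enflo type p if the analogous inequality holds with an arbitrary function f:\{-1,1\}^n \to X in place of the linear one:

\[ \mathbb{E}\,\big\|f(\varepsilon)-f(-\varepsilon)\big\|^p \;\le\; T^p \sum_{i=1}^n \mathbb{E}\,\big\|f(\varepsilon)-f(\varepsilon_1,\dots,-\varepsilon_i,\dots,\varepsilon_n)\big\|^p. \]

Taking f linear recovers Rademacher type, so Enflo type is formally the stronger notion; the result presented in the talk is that the two in fact coincide.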