TCS+

A carbon-free dissemination of ideas across the globe.

TCS+ talk: Wednesday, September 19th — Avishay Tal, Simons Institute

The next TCS+ talk will take place this coming Wednesday, September 19th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Avishay Tal from the Simons Institute will speak about an “Oracle Separation of BQP and the Polynomial Hierarchy” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: We present an oracle, A, relative to which \mathsf{BQP}^{A} is not contained in \mathsf{PH}^{A}.

Following the approach of Aaronson [STOC, 2010], our result is obtained by finding a distribution D over Boolean strings of length N such that:

  1. There exists a quantum algorithm that runs in time {\rm polylog}(N) and distinguishes between D and the uniform distribution over Boolean strings of length N.
  2. No Boolean circuit of quasi-polynomial size and constant depth can distinguish between D and the uniform distribution with advantage better than {\rm polylog}(N)/\sqrt{N} (both conditions are written out below).
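
Spelled out (the constant distinguishing advantage in the first condition is the standard formulation and an assumption here, not a figure quoted in the abstract), the two requirements on D read

\[
\Big| \Pr_{x \sim D}[\mathcal{Q}(x)=1] \;-\; \Pr_{x \sim U_N}[\mathcal{Q}(x)=1] \Big| \;\ge\; \Omega(1)
\]

for the {\rm polylog}(N)-time quantum algorithm \mathcal{Q}, while for every constant-depth Boolean circuit C of quasi-polynomial size,

\[
\Big| \Pr_{x \sim D}[C(x)=1] \;-\; \Pr_{x \sim U_N}[C(x)=1] \Big| \;\le\; \frac{{\rm polylog}(N)}{\sqrt{N}},
\]

where U_N denotes the uniform distribution over Boolean strings of length N.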

Joint work with Ran Raz.

 


Fall is Coming

Summer is over, and Fall is upon us. Trees changing color, pumpkins invading Starbucks’ iced chai lattes, and an exciting new season of TCS+!

The focus here is the latter, and specifically to remind you to (1) block your calendar, (2) book a room in a fortnightly fashion, and (3) sit back with your favorite talk-watching snack and prepare for a treat: as is customary, the TCS+ seminars will take place every other Wednesday, starting exactly two weeks from now (Wednesday, September 19th), at the by-now-standard time of 1pm Eastern (10am Pacific).

The first speakers lined up for the Fall are:

  • Sept. 19th: Avishay Tal (Simons Institute)
  • Oct. 3rd: (TBD)
  • Oct. 17th: C. Seshadhri (UC Santa Cruz)
  • Oct. 31st: (TBD)
  • Nov. 14th: Urmila Mahadev (UC Berkeley)

An up-to-date calendar will always appear on our webpage, where you can also subscribe to our mailing list and suggest talks.

As always, TCS+ welcomes feedback, be it on technical or scientific aspects. Please help us continue the success of our seminars by attending, suggesting speakers, and spreading the word to those universities worldwide that would most benefit from watching our seminars.

Best wishes for the new academic year,
The organizers.

TCS+ talk: Wednesday, May 30th — Michael Kearns, UPenn

The next and last TCS+ talk for the Spring will take place this coming Wednesday, May 30th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Michael Kearns from UPenn will speak about “Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: The most prevalent notions of fairness in machine learning are statistical definitions: they fix a small collection of pre-defined groups, and then ask for parity of some statistic of the classifier across these groups. Constraints of this form are susceptible to intentional or inadvertent “fairness gerrymandering”, in which a classifier appears to be fair on each individual group, but badly violates the fairness constraint on one or more structured subgroups defined over the protected attributes. We propose instead to demand statistical notions of fairness across exponentially (or infinitely) many subgroups, defined by a structured class of functions over the protected attributes. This interpolates between statistical definitions of fairness and recently proposed individual notions of fairness, but raises several computational challenges. It is no longer clear how to audit a fixed classifier to see if it satisfies such a strong definition of fairness.

We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses. But it also raises the possibility of applying empirically successful machine learning methods to the problem.

We then derive two algorithms that provably converge to the best fair classifier, given access to oracles which can solve the agnostic learning problem. The algorithms are based on a formulation of subgroup fairness as a two-player zero-sum game between a Learner and an Auditor. Our first algorithm provably converges in a polynomial number of steps. Our second algorithm enjoys only provable asymptotic convergence, but has the merit of simplicity and faster per-step computation. We implement the simpler algorithm using linear regression as a heuristic oracle, and show that we can effectively both audit and learn fair classifiers on real datasets.
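
To make the heuristic oracle above a bit more concrete, here is a minimal sketch of a single auditing step for false-positive-rate subgroup fairness, using linear regression over the protected attributes as the oracle, roughly in the spirit of the implementation mentioned in the abstract. The synthetic data, the linear-threshold subgroup class, and every identifier below are assumptions made for this example; this is not the authors' implementation.

```python
# Purely illustrative sketch: heuristically audit a fixed classifier for
# false-positive-rate subgroup fairness, using linear regression over the
# protected attributes as the heuristic oracle.  Synthetic data; every name
# and modelling choice here is an assumption, not the authors' code.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic data: two protected attributes, true labels y, and predictions h
# from the fixed classifier we want to audit.
n = 5000
protected = rng.normal(size=(n, 2))
y = rng.integers(0, 2, size=n)
# A classifier whose false positives happen to correlate with protected[:, 0].
h = ((rng.normal(size=n) + 0.8 * protected[:, 0]) > 0.5).astype(int)

# Equality of false positive rates is audited on the true negatives.
neg = y == 0
overall_fpr = h[neg].mean()

# Heuristic oracle: regress (h - overall FPR) on the protected attributes of
# the negatives; points where the regression output is positive form a
# candidate subgroup whose FPR exceeds the overall FPR.
oracle = LinearRegression().fit(protected[neg], h[neg] - overall_fpr)
subgroup = oracle.predict(protected[neg]) > 0

if subgroup.any():
    subgroup_fpr = h[neg][subgroup].mean()
    weight = subgroup.mean()  # fraction of negatives falling in the subgroup
    violation = weight * abs(subgroup_fpr - overall_fpr)
    print(f"overall FPR = {overall_fpr:.3f}, "
          f"subgroup FPR = {subgroup_fpr:.3f}, "
          f"weighted violation = {violation:.3f}")
```

In the algorithms of the paper, an audit of this kind is played repeatedly against a Learner in the zero-sum game formulation; the sketch above performs only a single audit of a fixed classifier.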

Joint work with Seth Neel, Aaron Roth, and Zhiwei Steven Wu.

TCS+ talk: Wednesday, May 23rd — Leonard Schulman, Caltech

The next TCS+ talk will take place this coming Wednesday, May 23rd at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Leonard Schulman from Caltech will speak about “Explicit Binary Tree Codes with Polylogarithmic Size Alphabet” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: Tree codes are “real time” or “causal” error-correcting codes. They are known to exist, but an explicit construction has been a longstanding open problem. We report on progress on this problem.

For every constant delta we give an explicit binary tree code with distance delta and alphabet size poly(log n), where n is the depth of the tree. This is the first improvement over a two-decade-old construction that has an exponentially larger alphabet of size poly(n).
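
For readers who have not seen tree codes before, the standard distance requirement can be stated operationally and checked by brute force on toy examples. Everything below (the checker and the toy prefix encoder) is chosen here for illustration and is unrelated to the construction of the talk; the toy encoder achieves distance 1 only by letting its alphabet grow exponentially, exactly the blow-up that good constructions avoid.

```python
# Illustrative brute-force check of the (standard) tree-code distance
# property, not the construction from the talk.  An encoder is causal: the
# i-th output symbol depends only on the first i input symbols.  Distance
# delta means that whenever two messages first differ at position i, the
# codewords disagree on at least a delta fraction of every window [i, j].
from itertools import product


def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))


def first_difference(x, y):
    for i, (u, v) in enumerate(zip(x, y)):
        if u != v:
            return i
    return None


def has_tree_code_distance(encode, n, alphabet, delta):
    """Check the distance property over all pairs of messages of length n."""
    for x in product(alphabet, repeat=n):
        cx = encode(x)
        for y in product(alphabet, repeat=n):
            i = first_difference(x, y)
            if i is None:
                continue
            cy = encode(y)
            for j in range(i, n):
                window = j - i + 1
                if hamming(cx[i:j + 1], cy[i:j + 1]) < delta * window:
                    return False
    return True


def prefix_encode(x):
    # Trivially causal toy encoder: the i-th symbol is the whole prefix
    # x_1..x_i.  It has distance 1, but the alphabet has size 2^n -- the
    # exponential blow-up that good tree-code constructions avoid.
    return [tuple(x[:i + 1]) for i in range(len(x))]


print(has_tree_code_distance(prefix_encode, n=6, alphabet=(0, 1), delta=0.5))
# True
```

The brute-force check is exponential in n, of course; it is only meant to pin down the definition.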

As part of the analysis, we prove a bound on the number of positive integer roots a real polynomial can have in terms of its sparsity with respect to the Newton basis, a result of independent interest.

Joint work with Gil Cohen (Princeton) and Bernhard Haeupler (CMU).

TCS+ talk: Wednesday, April 25th — Danupon Nanongkai, KTH

The next TCS+ talk will take place this coming Wednesday, April 25th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Danupon Nanongkai from KTH will speak about “Distributed All-Pairs Shortest Paths, Exactly” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: I will present the ~O(n^{5/4})-time distributed algorithm for computing all-pairs shortest paths exactly by Huang, Nanongkai, and Saranurak (FOCS 2017; https://arxiv.org/abs/1708.03903). The algorithm is fairly simple, and the talk will cover the necessary background. I will also briefly survey recent progress and some open problems in the field of distributed graph algorithms, in which this work lies.

TCS+ talk: Wednesday, April 11th — Shay Moran, IAS

The next TCS+ talk will take place this coming Wednesday, April 11th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Shay Moran from the IAS will speak about “On the expressiveness of comparison queries” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: Comparisons are a classical and well studied algorithmic tool that is used in a variety of contexts and applications.  We will discuss two manifestations of the expressiveness of these queries in machine learning and complexity theory (a more detailed overview is given below). Both manifestations are based on the notion of “inference dimension” that can be viewed as another instance of the fruitful link between machine learning and discrete mathematics — a link dating back to the discovery of the VC dimension.

Active classification with comparison queries.  
Active learning is a model for semi-supervised learning that captures situations in which unlabeled data is abundant and manually labelling it is expensive. We consider an extension of active learning in which the learning algorithm may ask the annotator to compare the distances of two examples from the boundary of their label-class. For example, in a recommendation system application (say for restaurants), the annotator may be asked whether she liked or disliked a specific restaurant (a label query), or which of two restaurants she liked more (a comparison query).
We prove that the use of comparison queries leads to an exponential improvement in the query complexity of some well studied problems. Specifically, for the class of half-spaces, we show that under natural assumptions, such as large margin or bounded bit-description of the input examples, it is possible to reveal all the labels of a sample of size n using approximately O(log n) queries.
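
To make the two query types concrete for half-spaces (in notation chosen here, not taken from the talk): for h(x) = {\rm sign}(\langle w, x\rangle - b), a label query on x returns h(x), while a comparison query on a pair (x, x') compares their distances to the decision boundary (the common normalization by \|w\| does not affect the sign), i.e.

\[
{\rm label}(x) = {\rm sign}\big(\langle w, x\rangle - b\big),
\qquad
{\rm comp}(x, x') = {\rm sign}\big(\,|\langle w, x\rangle - b| \;-\; |\langle w, x'\rangle - b|\,\big).
\]
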
Nearly optimal linear decision trees for k-SUM and related problems. 
We use the above mentioned techniques to construct linear decision trees for a variety of decision problems in combinatorics and discrete geometry. For example, for any constant k, we construct linear decision trees that solve the k-SUM problem on n elements using O(n log n) linear queries. Moreover, the queries we use are comparison queries, which compare the sums of two k-subsets; when viewed as linear queries, comparison queries are 2k-sparse and have only {−1,+1} coefficients.
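
Written out (again in notation chosen here), the comparison query between two k-subsets S, T \subseteq [n] is

\[
{\rm sign}\Big(\sum_{i \in S} x_i \;-\; \sum_{j \in T} x_j\Big)
= {\rm sign}\big(\langle v_{S,T}, x\rangle\big),
\qquad
v_{S,T} \in \{-1, 0, +1\}^n,
\]

where v_{S,T} has at most 2k nonzero coordinates (+1 on S \setminus T and -1 on T \setminus S), which is exactly the 2k-sparsity and \{-1,+1\}-coefficient property mentioned above.
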
We give similar constructions for sorting sumsets A+B and for solving the SUBSET-SUM problem, both with an optimal number of queries, up to poly-logarithmic terms.
Based on joint works with Daniel Kane, Shachar Lovett, and Jiapeng Zhang.

TCS+ talk: Wednesday, March 28th — Artur Czumaj, University of Warwick

The next TCS+ talk will take place this coming Wednesday, March 28th at 1:00 PM Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00 UTC). Artur Czumaj from the University of Warwick will speak about “Round Compression for Parallel Matching Algorithms” (abstract below).

Please make sure you reserve a spot for your group to join us live by signing up on the online form. As usual, for more information about the TCS+ online seminar series and the upcoming talks, or to suggest a possible topic or speaker, please see the website.

Abstract: For over a decade now we have been witnessing the success of massive parallel computation (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is the fact that these frameworks are able to accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context, though, is: can we leverage this additional power to obtain even faster parallel algorithms?

A prominent example here is the fundamental graph problem of finding a maximum matching. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in O(log n) rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. showed that if each machine has n^{1+Ω(1)} memory, this problem can also be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in follow-up work, seem, however, to get stuck in a fundamental way at roughly O(log n) rounds once we enter the near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power.

In this talk, we finally show how to refute that perplexing possibility. That is, we break the above O(log n) round complexity bound even in the case of slightly sublinear memory per machine. In fact, our improvement here is almost exponential: we are able to deliver a (2+ϵ)-approximation to maximum matching, for any fixed constant ϵ > 0, in O((log log n)^2) rounds.

This is a joint work with Jakub Łącki, Aleksander Mądry, Slobodan Mitrović, Krzysztof Onak, and Piotr Sankowski.