
Mathematics of Information
Seminar    (CS286)

4:00-5:00 pm in Annenberg 314
(except where otherwise noted)

October 2, 2018
Ben Lee Volk
Algebraic Complexity Theory: Lower bounds and Derandomization (1st of 2 parts)
    Algebraic complexity theory is a subfield of complexity theory that studies computational problems of an algebraic flavor, such as multiplying matrices, computing the determinant, and, more generally, computing polynomials over a field, where complexity is measured by the number of algebraic operations needed.
    Two basic open problems in the area are (1) proving superpolynomial circuit lower bounds for explicit polynomials (the algebraic analog of the P vs. NP problem), and (2) designing efficient deterministic algorithms for the Polynomial Identity Testing problem, which asks, given a circuit computing a polynomial, whether that polynomial is identically zero. Studying these questions often leads to beautiful mathematical questions of a combinatorial, algebraic, and geometric nature.
    In this talk, we will discuss several old and new results in the area, explore the mysterious, bidirectional, and intricate connections between the two problems above, and present some open problems.
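The randomized algorithm that the derandomization question seeks to replace is simple: evaluate the circuit at random points and apply the Schwartz-Zippel lemma. A minimal Python sketch, not from the talk (the interface `eval_circuit` and the sample-set size are illustrative assumptions):

```python
import random

def is_zero_poly(eval_circuit, num_vars, degree_bound, trials=20):
    """Randomized polynomial identity test via the Schwartz-Zippel lemma.

    If the polynomial is nonzero and has total degree <= degree_bound,
    a uniform point from S^num_vars evaluates to 0 with probability
    at most degree_bound / |S| per trial.
    """
    S = range(100 * degree_bound + 1)  # sample set much larger than the degree
    for _ in range(trials):
        point = [random.choice(S) for _ in range(num_vars)]
        if eval_circuit(point) != 0:
            return False  # found a witness: the polynomial is nonzero
    return True  # identically zero, with high probability

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero; xy - 1 is not.
zero_circuit = lambda p: (p[0] + p[1]) ** 2 - (p[0] ** 2 + 2 * p[0] * p[1] + p[1] ** 2)
nonzero_circuit = lambda p: p[0] * p[1] - 1
```

Derandomizing PIT means replacing the random points above with a deterministically constructed hitting set.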

October 9, 2018
Ben Lee Volk
Algebraic Complexity Theory: Lower bounds and Derandomization (2nd of 2 parts)

October 16, 2018
Alessandro Zocca
Hard-core interactions on graphs
    In this talk I will introduce the hard-core model with Metropolis transition probabilities on finite graphs, as well as its multi-state generalization. Motivated by the study of the performance of random-access networks, I will focus on the low-temperature regime and study the asymptotic behavior of these interacting particle systems by looking at first hitting times between stable states and at mixing times. In particular, I will show how the order of magnitude of these hitting and mixing times depends on the underlying graph structure, and derive precise asymptotics for various regular lattices. These results were obtained by extending the so-called "pathwise approach" developed in the statistical physics literature to study metastability phenomena, and they yield a rigorous understanding of the root cause of the poor delay performance of random-access networks.
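As a concrete illustration of the dynamics in question, here is a minimal Python sketch of one Metropolis step for the hard-core model on a finite graph (the graph representation and parameter names are illustrative assumptions, not from the talk):

```python
import random

def metropolis_hardcore_step(adj, occupied, lam, rng=random):
    """One Metropolis step for the hard-core model with activity lam.

    adj maps each vertex to the set of its neighbors; occupied is the
    current independent set. The stationary distribution is
    pi(I) proportional to lam^|I| over independent sets I.
    """
    v = rng.choice(sorted(adj))
    if v in occupied:
        # propose removing the particle at v; accept w.p. min(1, 1/lam)
        if rng.random() < min(1.0, 1.0 / lam):
            occupied.discard(v)
    elif not (adj[v] & occupied):
        # v has no occupied neighbor: propose adding; accept w.p. min(1, lam)
        if rng.random() < min(1.0, lam):
            occupied.add(v)
    # if a neighbor of v is occupied, the hard-core constraint blocks the move
    return occupied

# 4-cycle: at large lam the two stable states are {0, 2} and {1, 3}
ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```

At large activity lam (the low-temperature regime), the chain spends long stretches near the maximal independent sets, and the hitting times between them are the quantities the talk analyzes.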

October 23, 2018
Alessandro Achille (UCLA)
Complexity, Noise, and Emergent Properties of Learning Deep Representations
    I will show that information-theoretic quantities control and describe a large part of the training process of deep neural networks, and can be used to explain how properties such as invariance to nuisance variability and disentanglement of semantic factors emerge naturally in the learned representation. The resulting theory has connections with several fields, ranging from algorithmic complexity to variational inference. This framework not only predicts the asymptotic behavior of deep networks, but also shows that the initial learning transient has a large irreversible effect on the outcome of training, giving rise to critical learning periods akin to those observed in biological systems. This urges us to study the complex, and so far neglected, initial phase of learning.

October 30, 2018
Omer Tamuz
Combining and Comparing Experiments
    A classical statistical model of hypothesis testing is the Blackwell-Le Cam experiment, in which an observer makes inferences about the veracity of a hypothesis by observing a random outcome whose distribution depends on whether or not the hypothesis holds. We address a number of natural, classical questions: When does one experiment carry more information than another? How can we quantify the amount of information an experiment holds? Our results include an answer to an old question of Blackwell, as well as a novel axiomatization of the Kullback-Leibler divergence.
    Joint with Xiaosheng Mu, Luciano Pomatto, and Philipp Strack.
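For reference, the Kullback-Leibler divergence mentioned above has a direct finite-outcome form; a small Python sketch (the binary experiment below is an invented example, not from the talk):

```python
from math import log

def kl_divergence(p, q):
    """D(p || q) for finite distributions given as outcome -> probability
    dicts; requires q(x) > 0 wherever p(x) > 0 (absolute continuity)."""
    return sum(px * log(px / q[x]) for x, px in p.items() if px > 0)

# An invented binary experiment: the signal's law under each hypothesis.
p = {"high": 0.8, "low": 0.2}  # law of the outcome if the hypothesis holds
q = {"high": 0.5, "low": 0.5}  # law of the outcome otherwise
```

The further apart the two signal laws are in this sense, the more informative the experiment is about the hypothesis.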

November 27, 2018
Rasmus Kyng (Harvard)
Approximate Gaussian Elimination for Laplacians
    We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian matrix by a sparse LU factorization. We compute this factorization by subsampling standard Gaussian elimination. This gives the simplest known nearly linear time solver for Laplacian equations. The crux of our proof is the use of matrix martingales to analyze the algorithm. Finally, we will take a look at ongoing efforts to implement robust and fast Laplacian solvers based on these ideas.
    Based on joint work with Sushant Sachdeva and Daniel Spielman.
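The structural fact behind the algorithm is that exactly eliminating one vertex of a graph Laplacian replaces its star with a weighted clique on its neighbors (the Schur complement); the nearly linear time algorithm keeps the factorization sparse by subsampling that clique. A pure-Python sketch of the exact step only (the edge-dict representation is an illustrative choice, not the paper's data structure):

```python
def eliminate_vertex(weights, v):
    """Exactly eliminate vertex v from a weighted graph Laplacian.

    weights maps frozenset({i, j}) to a positive edge weight. Eliminating
    v removes its star and adds the Schur-complement clique on its
    neighbors: w_ij += w_iv * w_jv / deg(v). (The sparse algorithm
    samples a few edges from this clique instead of adding all of it.)
    """
    star = {e: w for e, w in weights.items() if v in e}
    deg = sum(star.values())  # weighted degree of v
    nbrs = sorted(set().union(*star) - {v}) if star else []
    for e in star:
        del weights[e]
    for a in range(len(nbrs)):
        for b in range(a + 1, len(nbrs)):
            i, j = nbrs[a], nbrs[b]
            fill = star[frozenset({i, v})] * star[frozenset({j, v})] / deg
            e = frozenset({i, j})
            weights[e] = weights.get(e, 0.0) + fill
    return weights

# Unit-weight path 1 - 2 - 3: eliminating 2 yields edge {1, 3} of weight 0.5
path = {frozenset({1, 2}): 1.0, frozenset({2, 3}): 1.0}
```

Repeating this step for every vertex yields the exact (dense) LU factorization; the matrix-martingale analysis controls the error introduced when each clique is replaced by a random sparse surrogate.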

December 4, 2018
Jan Hazla (MIT)
Reasoning in Bayesian Opinion Exchange Networks is Computationally Hard
    Bayesian models of opinion exchange are extensively studied in economics, dating back to the work of Aumann on the agreement theorem. An important class of such models features agents arranged on a network (representing, e.g., social interactions), with the network structure determining which agents communicate with each other. It is often argued that the Bayesian computations needed by agents in such models are difficult, but prior to our work there were no rigorous arguments for such hardness.
    We consider a well-studied model where fully rational agents receive private signals indicative of an unknown state of the world. Then, they repeatedly announce the state of the world they consider most likely to their neighbors, at the same time updating their beliefs based on their neighbors' announcements.
    I will discuss our complexity-theoretic results establishing hardness of agents' computations in this model. Specifically, we show that these computations are NP-hard and extend this result to PSPACE-hardness. We show hardness not only for exact computations: it is computationally difficult even to approximate the rational opinion in any meaningful way.
    Joint work with Ali Jadbabaie, Elchanan Mossel, and Amin Rahimian.

January 22, 2019
Gautam Kamath
Privately Learning High-Dimensional Distributions
    We present novel, computationally efficient, and differentially private algorithms for two fundamental high-dimensional learning problems: learning a multivariate Gaussian in R^d and learning a product distribution in {0,1}^d in total variation distance. The sample complexity of our algorithms nearly matches that of the optimal non-private learners for these tasks in a wide range of parameters. Thus, our results show that privacy comes essentially for free for these problems, providing a counterpoint to the many negative results showing that privacy is often costly in high dimensions. Our algorithms introduce a novel technical approach to reducing the sensitivity of the estimation procedure, which we call recursive private preconditioning and which may find additional applications.
    Based on joint work with Jerry Li, Vikrant Singhal, and Jonathan Ullman.
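The talk's algorithms are considerably more refined, but the sensitivity-plus-noise template on which differentially private estimation builds can be sketched in a few lines. Below, a basic Laplace-mechanism mean estimate in one dimension (not the authors' method; the clamping range and parameters are assumptions for illustration):

```python
import random

def private_mean(data, lo, hi, epsilon, rng=random):
    """Epsilon-differentially private mean via the basic Laplace mechanism.

    Values are clamped to [lo, hi], so changing one record moves the mean
    by at most (hi - lo) / n (the sensitivity); adding Laplace noise with
    that scale divided by epsilon yields epsilon-DP.
    """
    n = len(data)
    clamped = [min(max(x, lo), hi) for x in data]
    scale = (hi - lo) / (n * epsilon)
    # a Laplace(0, scale) sample is the difference of two Exp(mean=scale)
    noise = rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return sum(clamped) / n + noise
```

In high dimensions, naive clamping of this kind wastes samples, which is the gap the recursive preconditioning approach is designed to close.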

February 26, 2019
    CMI Special Lunch Seminar
            ★ 12 noon ★
            ★ Room 213 ★

Brent Waters (UT Austin)
Cryptographic Code Obfuscation
    A cryptographic code obfuscator takes as input the description of a program P and outputs a program P' that is functionally equivalent to the original but hides the inner workings of the original program to the maximal extent possible. Until very recently, no general-purpose obfuscators based on cryptographically hard problems existed; in 2013, however, researchers proposed the first candidate construction for "indistinguishability obfuscation". Since then there has been tremendous interest in the subject from the cryptography community.
    In this talk I will first introduce the concept and define indistinguishability obfuscation. Then I will show techniques for building cryptographic applications from it. Finally, I will conclude by discussing the challenging open problems in the area.

February 26, 2019
Miles Lopes (UC Davis)
Two New Bootstrap Methods for High-Dimensional and Large-Scale Data
    Bootstrap methods are among the most broadly applicable tools for statistical inference and uncertainty quantification. Although these methods have an extensive literature, much remains to be understood about their applicability in modern settings, where observations are high-dimensional, or where the quantity of data outstrips computational resources. In this talk, I will present two new bootstrap methods that are tailored to these settings. First, I will discuss the topic of "spectral statistics" arising from high-dimensional sample covariance matrices, and describe a method for approximating the distributions of such statistics. Second, in the context of large-scale data, I will discuss a more unconventional application of the bootstrap: dealing with the tradeoff between accuracy and computational cost in randomized numerical linear algebra. This will include joint work with Alexander Aue and Andrew Blandino (https://arxiv.org/abs/1709.08251, to appear in Biometrika), and with Michael Mahoney and Shusen Wang (https://arxiv.org/abs/1708.01945, to appear in JMLR).
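For readers unfamiliar with the baseline, the elementary percentile bootstrap that such methods extend can be sketched as follows (a generic textbook illustration, not the speaker's method):

```python
import random

def bootstrap_ci(data, stat, num_boot=2000, alpha=0.05, rng=random):
    """Percentile bootstrap confidence interval for stat(data).

    Resample the data with replacement num_boot times, recompute the
    statistic on every resample, and return the empirical alpha/2 and
    1 - alpha/2 quantiles of the replicates.
    """
    n = len(data)
    reps = sorted(
        stat([rng.choice(data) for _ in range(n)]) for _ in range(num_boot)
    )
    lo = reps[int(num_boot * alpha / 2)]
    hi = reps[int(num_boot * (1 - alpha / 2)) - 1]
    return lo, hi

sample_mean = lambda xs: sum(xs) / len(xs)
```

The settings in the talk are precisely those where this naive recipe strains: when `stat` is a spectral statistic of a high-dimensional covariance matrix, or when recomputing it num_boot times is itself the computational bottleneck.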

March 5, 2019
★ NOTE: will be held in
Hall Auditorium (135 Gates Thomas) ★
Gitta Kutyniok (TU Berlin)
Approximation Theory meets Deep Learning
    Despite the outstanding success of deep neural networks in real-world applications, most of the related research is empirically driven, and a mathematical foundation is almost completely missing. One central task of a neural network is to approximate a function, which may, for instance, encode a classification task. In this talk, we will be concerned with the question of how well a function can be approximated by a neural network with sparse connectivity. Using methods from approximation theory and applied harmonic analysis, we will derive a fundamental lower bound on the sparsity of a neural network. By explicitly constructing neural networks based on certain representation systems, so-called $\alpha$-shearlets, we will then demonstrate that this lower bound can in fact be attained. Finally, we present numerical experiments which, surprisingly, show that the standard backpropagation algorithm already generates deep neural networks obeying those optimal approximation rates.

March 12, 2019
Nisheeth Vishnoi (Yale)
Towards Controlling Bias in AI Systems
    Powerful AI systems, driven by machine learning tools, increasingly control various aspects of modern society: from social interactions (e.g., Facebook, Twitter, Google, YouTube) and economics (e.g., Uber, Airbnb, banking) to learning (e.g., Wikipedia, MOOCs) and governance (judgements, policing, voting). These systems have tremendous potential to change our lives for the better but, through their ability to mimic and nudge human behavior, they also have the potential to be discriminatory, to reinforce societal prejudices, and to polarize opinions. Indeed, recent studies have demonstrated that these systems can be quite brittle and generally lack the qualities required for deployment in various human-centric/societal contexts, because considerations such as fairness, explainability, and accountability have largely been an afterthought in their development.
    In this talk, I will outline our efforts to incorporate some of the above considerations in a principled manner into core machine learning tasks such as classification, data summarization, ranking, personalization, and online advertisement. Our work leads to new algorithms that can control and alleviate bias in their outputs, come with provable guarantees, and often have a low "price of fairness".
    Based on several joint works with Elisa Celis.

May 7, 2019
         ★ Room 213 ★
Alistair Sinclair (UC Berkeley)
topic TBA

May 14, 2019
         ★ Room 213 ★
Adam Smith (Boston University)
topic TBA


2018-19
CMI Faculty Lunches

by invitation


November 8, 2018
Venkat Chandrasekaran
Newton Polytopes and Relative Entropy Optimization
    We discuss how relative entropy is uniquely suited to optimization problems involving sparse polynomials. Our results connect to the work of Descartes (the rule of signs) and Khovanskii (the theory of fewnomials). The Newton polytope associated to a polynomial plays a central role in our development. This is joint work with Riley Murray and Adam Wierman.

December 13, 2018
Omer Tamuz
Equitable Voting Rules
    Joint with Laurent Bartholdi, Wade Hann-Caruthers, Maya Josyula and Leeat Yariv.
    Given n voters and two candidates A and B, a voting rule is a map from {A,B,abstain}^n to {A,B,tie}. An important example is majority, which has a desirable symmetry property: permuting the voters leaves the result unchanged. We study a weakening of this symmetry assumption, which allows for a far richer set of rules that still treat voters equally. We show that these rules can have very small---but not too small---winning coalitions (i.e., a set of voters that can control the election outcome).
    I will discuss the relation to various known results and open problems in the theory of finite groups. No previous knowledge required.
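The formal setup in the abstract can be made concrete: majority is the canonical fully symmetric rule, a map from {A, B, abstain}^n to {A, B, tie} that is invariant under every permutation of the voters. A minimal Python sketch:

```python
def majority_rule(votes):
    """Majority as a map from {A, B, abstain}^n to {A, B, tie}.

    Fully symmetric: permuting the voters never changes the outcome,
    so majority treats all voters equally.
    """
    a, b = votes.count("A"), votes.count("B")
    if a > b:
        return "A"
    if b > a:
        return "B"
    return "tie"
```

The talk's equitable rules weaken this full symmetry to a smaller group of permutations under which the rule is invariant, which is what permits winning coalitions much smaller than half the electorate.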

March 14, 2019
Peter Schröder
Shape from Metric
    Joint work with Albert Chern, Felix Knöppel and Ulrich Pinkall (all of TU Berlin Math)
    We study the isometric immersion problem for orientable surface triangle meshes endowed with only a metric: given the combinatorics of the mesh together with edge lengths, approximate an isometric immersion into R^3.
    To address this challenge we develop a discrete theory for surface immersions into R^3. It precisely characterizes a discrete immersion, up to subdivision and small perturbations. In particular our discrete theory correctly represents the topology of the space of immersions, i.e., the regular homotopy classes which represent its connected components. Our approach relies on unit quaternions to represent triangle orientations and to encode, in their parallel transport, the topology of the immersion. In unison with this theory we develop a computational apparatus based on a variational principle. Minimizing a non-linear Dirichlet energy optimally finds extrinsic geometry for the given intrinsic geometry and ensures low metric approximation error.
    We demonstrate our algorithm with a number of applications from mathematical visualization and art-directed isometric shape deformation, which mimics the behavior of thin materials with high membrane stiffness.


April 11, 2019

May 9, 2019
Adam Wierman

June 13, 2019
Babak Hassibi